
One of the problems when working with spatial data is figuring out what to GROUP BY when aggregating data spatially.  I'll talk more about this in a future post and, for one of the approaches I tried, I needed the ability to draw an arbitrary geography polygon (a square in my case) that is yay long and wide.

Obviously with the geography type, you can't just add n units to the x/y axes as you need geographic co-ordinates (latitude and longitude).  I needed a function where I could find out the co-ordinates of a point, say, 500 metres north of the current position.

I turned to Chris Veness’ excellent source of spatial scripts (http://www.movable-type.co.uk/scripts/latlong.html) and cribbed the rhumb line function, converting it into T-SQL.

The function takes a start point, a bearing (in degrees) and a distance in metres – returning a new point.

Please note that this function is hard-coded for WGS84 (srid 4326) and may need tweaking to get the earth's radius out of sys.spatial_reference_systems, or other changes to suit your requirements.

Disclaimer: This code is provided as-is with no warranties and suits my purpose, but I can’t guarantee that it will work for you and the missile guidance system that you may be building.

Simon Munro

@simonmunro

CREATE FUNCTION [dbo].[RumbLine]
(
@start GEOGRAPHY,
@bearing FLOAT,
@distance FLOAT
)
RETURNS GEOGRAPHY
AS
BEGIN
--Rumb line function created by Simon Munro
--Original post at
--http://simonmunro.com/2010/10/13/rumb-line-function-for-sql-server
--Algorithm thanks to
--http://www.movable-type.co.uk/scripts/latlong.html
--Hard coded for WGS84 (srid 4326)
DECLARE @result GEOGRAPHY;
DECLARE @R FLOAT = 6378137.0;
DECLARE @lat1 FLOAT, @lon1 FLOAT;
DECLARE @lat2 FLOAT, @lon2 FLOAT;

SET @distance = @distance/@R;  
SET @bearing = RADIANS(@bearing);
SET @lat1 = RADIANS(@start.Lat);
SET @lon1 = RADIANS(@start.Long);

SET @lat2 = ASIN(SIN(@lat1)*COS(@distance) + COS(@lat1)*SIN(@distance)*COS(@bearing));

SET @lon2 = @lon1 + ATN2(SIN(@bearing)*SIN(@distance)*COS(@lat1), COS(@distance)- SIN(@lat1)*SIN(@lat2));
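--Normalise the longitude to the range -180..180 degrees; the CONVERTs to
--DECIMAL are needed because the T-SQL modulo operator does not accept FLOATs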
SET @lon2 = CONVERT(DECIMAL(20,8), (@lon2+3*PI())) % CONVERT(DECIMAL(20,8),(2*PI())) - PI();

DECLARE @resultText VARCHAR(MAX);
SET @resultText = 'POINT('+CONVERT(VARCHAR(MAX),DEGREES(@lon2)) +' '+ CONVERT(VARCHAR(MAX), DEGREES(@lat2))+')';
SET @result = geography::STGeomFromText(@resultText, 4326);

RETURN @result;
END;
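For illustration, here is the kind of thing I use it for – a minimal sketch that walks out the corners of a 500 metre square from an arbitrary start point.  The co-ordinates and the anticlockwise corner order (which the geography type requires, so that the interior is on the left) are just my example:

--Walk the corners of a 500m square anticlockwise from the south-west corner
DECLARE @side FLOAT = 500;
DECLARE @sw GEOGRAPHY = geography::STGeomFromText('POINT(-0.1281 51.5080)', 4326);
DECLARE @se GEOGRAPHY = dbo.RumbLine(@sw, 90, @side); --500m east
DECLARE @ne GEOGRAPHY = dbo.RumbLine(@se, 0, @side);  --then 500m north
DECLARE @nw GEOGRAPHY = dbo.RumbLine(@sw, 0, @side);  --500m north of the start

DECLARE @square GEOGRAPHY = geography::STGeomFromText('POLYGON(('
    + CONVERT(VARCHAR(MAX), @sw.Long) + ' ' + CONVERT(VARCHAR(MAX), @sw.Lat) + ','
    + CONVERT(VARCHAR(MAX), @se.Long) + ' ' + CONVERT(VARCHAR(MAX), @se.Lat) + ','
    + CONVERT(VARCHAR(MAX), @ne.Long) + ' ' + CONVERT(VARCHAR(MAX), @ne.Lat) + ','
    + CONVERT(VARCHAR(MAX), @nw.Long) + ' ' + CONVERT(VARCHAR(MAX), @nw.Lat) + ','
    + CONVERT(VARCHAR(MAX), @sw.Long) + ' ' + CONVERT(VARCHAR(MAX), @sw.Lat)
    + '))', 4326);

SELECT @square.STArea(); --should be in the region of 250,000 square metres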

An investigation triggered by the lack of support of spatial data in SQL Azure has left me with the (unconfirmed) opinion that although requested by customers, the support of spatial data in SQL Azure may not be good enough to handle the requirements of a scalable solution that has mapping functionality as a primary feature.

Update: SQL Azure now has spatial support. The arguments made in this post are still valid and use of spatial features in SQL Azure should be carefully considered.

I have been asked to investigate the viability of developing a greenfield application in Azure as an option to the currently proposed traditional hosting architecture.  The application is a high-load, public-facing, map-enabled application and the ability to do spatial queries is near the top of the list of absolute requirements.  The mapping of features from the traditionally hosted architecture is fine until reaching SQL 2008's spatial types and features, which are unsupported under SQL Azure – triggering further investigation.

It would seem that the main reason why spatial features are not supported in SQL Azure is that those features make use of functions which run within SQLCLR, which is also unsupported in SQL Azure.  The lack of support for SQLCLR is understandable to a degree due to how SQL Azure is set up – messing around with SQLCLR on multitenant databases could be a little tricky.

The one piece of good news is that some of the assemblies used by the spatial features in SQLCLR are available for .NET developers to use and are installed into the GAC on some distributions (R2 amongst them), and people have been able to successfully make use of spatial types using these SQL-originated managed code libraries.  Johannes Kebeck, the Bing Maps guru from MS in the UK, has blogged on making use of these assemblies and doing spatially oriented work in Azure.

So far, it seems like there may be a solution or workaround to the lack of spatial support in SQL Azure, as some of the code can be written in C#.  However, further investigation reveals that those assemblies contain only the types and some mathematics surrounding them; the key part of the whole process, the spatial index, remains firmly locked away in SQL Server, and the inability to query spatial data takes a lot of the goodness out of the solution.

No worries, one would think – all that you need to do is get some view into the roadmap of SQL Azure support for SQL 2008 functionality and you can plan or figure it out accordingly.  After all, on the Microsoft-initiated, supported and sanctioned SQL Azure UserVoice website mygreatsqlazureidea.com, the feature ‘Support Spatial Data Types and SQLCLR’ comes in at a fairly high position five on the list, with the insightful comment ‘Spatial in the cloud is the killer app for SQL Azure. Especially with the proliferation of personal GPS systems.’  The SQL Azure team could hardly ignore that observation and support – putting it somewhere up there on their product backlog.

When native support for spatial data in SQL Azure is planned is another matter entirely, and those of us on the outside can only speculate.  You could ask Microsoft directly or indirectly, or even try to get your nearest MVP really drunk – an MVP who, when offered the choice between breaking their NDA and having compromising pictures put up on Facebook, will choose the former.

Update: You can use your drunk MVP to try and glean other information instead, as it has been announced that SQL Azure will support spatial data in June 2010 – see http://blogs.msdn.com/sqlazure/archive/2010/03/19/9981936.aspx and http://blogs.msdn.com/edkatibah/archive/2010/03/21/spatial-data-support-coming-to-sql-azure.aspx (see comments below).  This is not a solution to all geo-aware cloud applications, so I encourage you to read on.

I have nth-hand, unsubstantiated news that the drastic improvements to spatial features in SQL 2008 R2 were made by taking some of the functionality out of SQLCLR functions and putting it directly into the SQL runtime, which means that even a slightly cut-down version of SQL Azure based on R2 – which I think is inevitable – would likely have better support for spatial data.

Update:  In the comments below, Ed Katibah from Microsoft confirms that the spatial data support is provided by SQLCLR functionality and is not part of the R2 runtime.

In assessing this project’s viability as an Azure solution, I needed to understand a little bit more about what was being sacrificed by not having SQL spatial support and am of the opinion that it is possibly a benefit.

Stepping back a bit, perhaps it is worthwhile trying to understand why SQL has support for spatial data in the first place.  After all, it only came in SQL 2008, mapping and other spatial applications have been around longer than that and, to be honest, I haven't come across many solutions that use the functionality.  To me, SQL support of spatial data is BI bling – you can, relatively cheaply (by throwing a table of co-ordinates against postal codes and mapping your organization's regions), have instant, cool-looking pivot tables, graphs, charts and other things that are useful in business.  In other words, the addition of spatial support adds a lot of value to existing data whose transactions do not really have a spatial angle.  The spatial result is a side effect of (say) the postal code, which is captured for delivery reasons rather than explicit BI benefits.

The ability to pimp up your sales reports with maps, while a great feature that will sell a lot of licences, probably belongs as a feature of SQL Server (rather than the reporting tool).  But I question the value of using SQL as the spatial engine for an application that has spatial functionality as a primary feature.  You only have to think about Google Maps, Street View and directions – the sheer scale of the solution and the millions of lives it affects – and ask yourself whether or not behind all the magic there is some great big SQL database serving up the data.  Without knowing or Googling the answer, I would suggest with 100% confidence that the answer is clearly ‘No’.

So, getting back to my Azure viability assessment, I found myself asking the question:

If SQL Azure had spatial support, would I use it in an application where the primary UI and feature set is map and spatially oriented?

But before answering that I asked,

Would I propose an architecture that used SQL spatial features as the primary spatial data capability for a traditionally hosted application where the primary UI and feature set is map and spatially oriented?

The short answer to both questions is a tentative no.  Allow me to provide the longer answer.

The first thing to notice about spatial data is that the things whose location you are interested in don't really move around much.  The directions from Nelson's Column to Westminster Abbey are not going to change much and neither are the points of interest along the way.  In business you have similar behaviour – customers' delivery addresses don't move around much and neither do your offices, staff and reporting regions.  The second thing about spatial data is the need for indexes so that queries, such as the closest restaurants to a particular point, can be run against the data; spatial indexes solve this problem by providing tree-like indexing in order to group together co-located points.  These indexes are multidimensional in nature and a bit more complex than the flatter indexes that we are used to with tabular data.
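To make that concrete, this is the shape of query a spatial index exists to serve – a sketch against a hypothetical Restaurant table with a GEOGRAPHY column named Location (the names and the 1km cut-off are my own example):

DECLARE @here GEOGRAPHY = geography::STGeomFromText('POINT(-0.1281 51.5080)', 4326);

SELECT TOP (10) r.Name, r.Location.STDistance(@here) AS Metres
FROM dbo.Restaurant r
WHERE r.Location.STDistance(@here) < 1000 --within 1km, so a spatial index can prune
ORDER BY r.Location.STDistance(@here);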

Because of the slow pace at which coastlines, rivers, mountains and large buildings move around, the need to have dynamically updated spatial data, and hence indexes, is quite low.  So while algorithms exist to add data to spatial indexes, performing inserts is quite expensive, and in many cases indexes can simply be rebuilt from scratch whenever there is a bulk modification or insert of the underlying data.

So while SQL Server 2008 manages spatial indexes as it does any other index, namely by updating the index when the underlying data changes, I call into question the need for such functionality for data that is seldom going to change.

If data has a low rate of change, spatial or not, it becomes a candidate for caching, and highly scalable websites have caching at the core of their solutions (or problems, depending on how much they have).  So if I were to scale out my solution, is it possible to cache the relatively static data and the spatial indexes in some other data store that is potentially distributed across many nodes of my network?  Unfortunately, unlike a simple structure like a table, the data within a spatial index (we are talking about the index here and not the underlying data) is wrapped up closely with the process or library that created it.  So, in the case of SQL Server, the spatial index is simply not accessible from anywhere other than SQL Server itself.  This means that I am unable to cache or distribute the spatial indexes unless I replicate the data to another SQL instance and rebuild the index on that instance.

So while I respect the functionality that SQL Server offers with spatial indexing, I question the value of having to access indexed data in SQL Server just because it seems to be the most convenient place to access the required functionality (at least for a Microsoft-biased developer).  If my application is map oriented (as opposed to BI bling), how can I be sure that I won't run into a brick wall with SQL Server – with spatial indexes in particular?  SQL Server is traditionally known as the bottleneck in any solution, and putting my core functionality into that bottleneck, before I have even started and without much room to manoeuvre, is a bit concerning.

I should be able to spin up spatial indexes wherever I want to and in a way that is optimal for a solution.  Perhaps I can have indexes that cover the entire area at a high level and generate lower-level ones as required.  Maybe I can pre-populate some indexes for popular areas, or for an event that is going to take place in a certain area.  Maybe I am importing data points all of the time and don't want SQL spending time churning indexes as data, which I am not interested in yet, is being imported.  Maybe I want to put indexes on my rich client so that the user has a lightning-fast experience as they scratch around in a tiny little part of the world that interests them.

In short, maybe I want a degree of architectural and development control over my spatial data that is not provided by SQL's monolithic approach to data.

This led me to investigating other ways of dealing with spatial data (generally), but more specifically spatial indexes.  Unsurprisingly there are a lot of algorithms and libraries out there that seem to have their roots in a C and Unix world.  The area of spatial indexing is not new and a number of algorithms have emerged as popular mechanisms to build spatial indexes.  The two most popular are R-Tree (think B-Tree for spatial data) and Quadtree (where a tree is built up by dividing areas into quadrants).

There is a wealth of information on these fairly well understood algorithms, and even Microsoft's own implementations do not fall far from them.  Bing Maps uses ‘QuadKeys’ to index tiles, seemingly referring to the underlying Quadtree index.  (SQL Server is a bit different though: it uses a four-level grid indexing mechanism that is non-recursive and uses tessellation to set the granularity of the grid.)
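That four-level grid is visible in the index DDL itself – a sketch on the same hypothetical Restaurant table as before, with the density of each grid level and the number of tessellated cells per object spelled out:

CREATE SPATIAL INDEX SIX_Restaurant_Location
ON dbo.Restaurant(Location)
USING GEOGRAPHY_GRID
WITH (GRIDS = (LEVEL_1 = MEDIUM, LEVEL_2 = MEDIUM, LEVEL_3 = HIGH, LEVEL_4 = HIGH),
      CELLS_PER_OBJECT = 16); --how finely each object is tessellated into cells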

So if all of this spatial data stuff is old hat, surely there are some libraries available for implementing your own spatial indexes in managed code?  It seems that there are some well-used open source libraries and tools available.  Many commercial products, as well as SharpMap, an OSS GIS library, make use of NetTopologySuite, a direct port of the Java-based JTS.  These libraries have a lot of spatially oriented functions, most of which only make vague sense to me, including a read-only R-Tree implementation.

Also, while scratching around, I got the sense that Python has emerged as the spatial/GIS language of choice (it makes sense considering all those C academics started using Python).  It seems that there are a lot of Python libraries out there that are potentially useful within a .NET world using IronPython.

It is still early in my investigation, but I can't shake the feeling that making use of SQL 2008 for spatial indexing, because that is the only hammer that Microsoft provides, is not necessarily the best solution.  This is based on the following observations:

  • Handling of spatial data is not new – it is actually a mature part of computer science.  In fact SQL Server was pretty slow to implement spatial support.
  • An RDBMS like SQL Server or Oracle may be a good place to store data, but not necessarily the best place to have your indexes.  The SQL bias towards data consistency and availability is counter to the demands of spatial data and their indexes.
  • In order to develop a map oriented solution, a fine degree of control over spatial data may be required to deliver the required functionality at scale.

While I am not against OSS, evaluating libraries can be risky and difficult, and I am stunned at the lack of support for spatial data in managed code coming out of Microsoft.  Microsoft needs to pay attention to the demand for spatial data support from developers (not just database report writers).  The advent of always-connected, geo-aware mobile devices, and their users' familiarity with maps and satnav, will push the demand for applications that are supportive of geographic data.  It is not hard to picture teenagers demanding a map on their mobile devices that shows the real-time location of their social network.

To support this impending demand, Microsoft needs to make spatial data a first class citizen of the .NET framework (system.spatial).  It wouldn’t take much, just get some engineers from SQL and Bing maps to talk to each other for a few weeks.  Microsoft, if you need some help with that, let me know.

In the meantime I will walk down the road of open source spatial libraries and let you know where that road leads.

Simon

@simonmunro

I like to ride motorbikes.  Currently I ride a BMW K1200S – a sports tourer that is both fast and comfortable on the road.  Before that I had a five year affair with a BMW R1150GS which took me to all sorts of off-the-beaten-track destinations before we abruptly parted company with me flying through the air in one direction as my bike was smashed in the other direction by criminals in a getaway car.

Most motorbike enthusiasts have, like me, owned a few in their lifetimes and in most cases they are of differing types.  A road bike, no matter how much you are prepared to spend, can barely travel faster than walking pace on a good quality dirt road because, apart from the obvious things like tyres and suspension, the geometry is all wrong.  The converse is similar – a good dirt bike is frustrating, dull and downright dangerous to ride on a road.

Bikers understand the issues around suitability for purpose and compromise more than most (such as car drivers).  Our lottery-winning fantasies have a motorbike garage filled, not simply with classics or expense, but with a bike suitable for every purpose and occasion – track, off-road, touring, commuting, cafe racing and every other obvious niche.  Some may even want a Harley-Davidson for the odd occasion when one wants to ride a machine that leaks more oil than the fuel it uses, travelling in a perfectly straight line for 200 yards before it overheats and the rider suffers renal damage.

But I digress.  Harley Davidson hogs, fanbois (or whatever the collective noun is for Harley Davidson fans) can move on.  This post has nothing to do with you.

There is nothing in the motorbike world that is analogous to the broad suitability of the SQL RDBMS.  SQL spans the most simple and lightweight up to the complex, powerful and expensive – with virtually every variation in between covered.  It is not just motorbikes; a lot of products out there would want such broad suitability – cars, aeroplanes and buildings.  SQL is in a very exclusive club of products that solve such a broad range of the same problem and, in the case of SQL, that problem is data storage and retrieval.  SQL also seems to solve this problem in a way where the relationships between load, volume, cost and power are fairly linear.

SQL's greatest remaining strength, and the source of its almost industry-wide ubiquity, is that it is the default choice for storing and retrieving data.  If you want to store a handful of records, you might as well use a SQL database, not text files.  And if you want to store and process huge amounts of transactional data, in virtually all cases a SQL database is the best choice.  So over time, as the demands and complexity of our requirements have grown, SQL has filled the gaps like sand on a windswept beach, exclusively filling every nook and cranny.

We use SQL for mobile devices, we use SQL for maintaining state on the web, we use SQL for storing rich media, and use it to replicate data around the world.  SQL has, as it has been forced to satisfy all manner of requirements, been used, abused, twisted and turned and generally made to work in all scenarios.  SQL solutions have denormalization, overly complex and inefficient data models with thousands of entities, and tens of thousands of lines of unmaintainable database code. But still, surprisingly, it keeps on giving as hardware capabilities improve, vendors keep adding features and people keep learning new tricks.

But we are beginning to doubt the knee-jerk implementation of SQL for every data storage problem and, at least at the fringes of its capabilities, SQL is being challenged.  Whether it be developers moving away from over-use of database programming languages, cloud architects realising that SQL doesn't scale out very well, or simply CIOs getting fed up with buying expensive hardware and more expensive licences, the tide is turning against SQL's dominance.

But this post is not an epitaph for SQL, or another some-or-other-technology-is-dead post.  It is rather an acknowledgement of the role that SQL plays – a deliberate, metronomic applause and standing ovation for a technology that is, finally, showing that it is not suitable for every conceivable data storage problem.  It is commendable that SQL has taken us this far, but the rate at which we are creating information is exceeding the rate at which we can cheaply add power (processing, memory and I/O performance) to a single database instance.

SQL’s Achilles heel lies in its greatest strength – SQL is big on locking, serial updates and other techniques that allow it to be a bastion for consistent, reliable and accurate data.  But that conservative order and robustness comes at a cost and that cost is the need for SQL to run on a single machine.  Spread across multiple machines, the locking, checking, index updating and other behind the scenes steps suffer from latency issues and the end result is poor performance.  Of course, we can build even better servers with lots of processors and memory or run some sort of grid computer, but then things start getting expensive – ridiculously expensive, as heavy metal vendors build boutique, custom machines that only solve today’s problem.

The scale-out issues with SQL have been known for a while by a small group of people who build really big systems.  But recently the problems have moved into more general consciousness, thanks to Twitter's fail whale, which is largely due to data problems, and the increased interest in the cloud by developers and architects of smaller systems.

The cloud, by design, tries to make use of smaller commodity (virtualized) machines and therefore does not readily support SQL's need for fairly heavyweight servers.  So people looking at the cloud, despite promises that their applications will port easily, are obviously asking how to bring their databases into the cloud – and finding a distinct lack of answers.  The major database players seem to quietly ignore the cloud and don't have cloud solutions – you don't see DB2, Oracle or MySQL for the cloud – and the only vendor giving it a go, to their credit (and possibly winning market share), is Microsoft with SQL Server.  Even then, SQL Azure (the version of SQL Server that runs on Azure) has limitations, including size limitations that are indirectly related to the size of the virtual machine on which it runs.

Much is being made of approaches to get around the scale-out problems of SQL and, with SQL Azure in particular, of discussions around a sharding approach for data.  Some of my colleagues were actively discussing this and it led me to weigh in and make the following observation:

There are only two ways to solve the scale out problems of SQL Databases

1. To provide a model that adds another level of abstraction for data usage (EF, Astoria)

2. To provide a model that adds another level of abstraction for more complicated physical data storage (Madison)

In both cases you lose the “SQLness” of SQL.

It is the “SQLness” that is important here and it is the most difficult thing to find the right compromise for.  “SQLness” to an application developer may be easy-to-use database drivers and SQL syntax; to a database developer it may be the database programming language and environment; to a data modeller it may be foreign keys; to a DBA it may be the reliability and recoverability offered by transaction logs.  None of the models that have been presented satisfy the perspectives of all stakeholders, so it is essentially impossible to scale out SQL by the definition of what everybody thinks a SQL database is.

So the pursuit of the holy grail of a scaled out SQL database is impossible.  Even if some really smart engineers and mathematicians are able to crack the problem (by their technically and academically correct definition of what a SQL database is), some DBA or developer in some IT shop somewhere is going to be pulling their hair out thinking that this new SQL doesn’t work the way it is supposed to.

What is needed is a gradual introduction of the alternatives and the education of architects as to what to use SQL for and what not to – within the same solution.  Just like you don’t need to store all of your video clips in database blob fields, there are other scenarios where SQL is not the only option.  Thinking about how to architect systems that run on smaller hardware, without the safety net of huge database servers, is quite challenging and is an area that we need to continuously discuss, debate and look at in more detail.

The days of assuming that SQL will do everything for us are over and, like motorcyclists, we need to choose the right technology or else we will fall off.

Simon Munro

@simonmunro

Database sharding, as a technique for scaling out SQL databases, has started to gain mindshare amongst developers.  This has recently been driven by the interest in SQL Azure, closely followed by disappointment because of the 10GB database size limitation, which in turn is brushed aside by Microsoft who, in a vague way, point to sharding as a solution to the scalability of SQL Azure.  SQL Azure is a great product and sharding is an effective (and successful) technique, but before developers that have little experience with building scalable systems are let loose on sharding (or even worse, on vendor support for ‘automatic’ sharding), we need to spend some time understanding what the issues are with sharding, the problem that we are trying to solve, and some ways forward to tackle the technical implementation.

The basic principles of sharding are fairly simple.  The idea is to partition your data across two or more physical databases so that each database (or node) has a subset of the data.  The theory is that in most cases a query or connection only needs to look in one particular shard for data, leaving the other shards free to handle other requests.  Sharding is easily explained by a simple single-table example.  Let's say you have a large customer table that you want to split into two shards.  You can create the shards by putting all of the customers whose names start with ‘A’ up to ‘L’ in one database and those from ‘M’ to ‘Z’ in another, i.e. a partition key on the first character of the Last Name field.  With 13 letters in each shard you would expect an even spread of customers across both shards, but without data you can't be sure – maybe there are more customers in the first shard than the second, and maybe your particular region has more in one than the other.
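As a minimal sketch of what the application layer ends up owning in that example (the table and shard names here are my own illustration, not a prescribed design), the split boils down to a shard map that is consulted before every connection:

--A shard map the application consults to route a request to the right database
CREATE TABLE dbo.ShardMap
(
    RangeStart CHAR(1) NOT NULL, --first letter of Last Name, inclusive
    RangeEnd   CHAR(1) NOT NULL, --inclusive
    ShardName  SYSNAME NOT NULL  --which database to connect to
);

INSERT dbo.ShardMap VALUES ('A', 'L', 'CustomersShard1');
INSERT dbo.ShardMap VALUES ('M', 'Z', 'CustomersShard2');

--Which shard holds the customer 'Munro'?
DECLARE @lastName NVARCHAR(100) = N'Munro';
SELECT ShardName
FROM dbo.ShardMap
WHERE UPPER(LEFT(@lastName, 1)) BETWEEN RangeStart AND RangeEnd;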

Let's say that you think it will be better to shard customers by region to get a more even split, and you have three shards: one for the US, one for Europe and one for the rest of the world.  You may find that although the number of rows is even, the load across each shard differs.  80% of your business may come from a single region or, even if the amount of business is even, the load will differ across different times of the day as business hours move across the world.  The same problem exists across all primary entities that are candidates for sharding.  For example, your product catalogue sharding strategy will have similar issues.  You can use product codes for an even split, but you may find that top-selling products are all in one shard.  If you fix that, you may find that top-selling products are seasonal, so today's optimal shard will not work at all tomorrow.  The problem can be expressed as:

The selection of a partition key for sharding is dependent on the number of rows that will be in each shard and the usage profile of the candidate shard over time.

Those are some of the issues in just trying to figure out your sharding strategy – and that is the easy part.  Sharding seems to have a rule that the application layer is responsible for understanding how the data is split across each shard (whereas the term ‘partitioning’ is applied more to the RDBMS, where it is transparent to the application).  This creates some problems:

  • The application needs to maintain an index of partition keys in order to query the correct database when fetching data.  This means that there is some additional overhead – database round trips, index caches and some transformation of application queries into the correctly connected database query.  While simple for a single table, it is likely that a single object may need to be hydrated from multiple databases, and figuring out where to go and fetch each piece of data, dynamically (depending on already-fetched pieces of data), can be quite complex.
  • Any sharding strategy will always be biased towards a particular data traversal path.  For example, in a customer biased sharding strategy you may have the related rows in the same shard (such as the related orders for the customer).  This works well because the entire customer object and related collections can be hydrated from a single physical database connection, making the ‘My Orders’ page snappy.  Unfortunately, although it works for the customer oriented traversal path, the order fulfilment path is hindered by current and open orders being scattered all over the place.
  • Because the application layer owns the indexes and is responsible for fetching data, the database is rendered impotent as a query tool – each individual database knows nothing about the other shards and cannot execute a query accordingly.  Even if there were shard index availability in each database, it would trample all over the application layer's domain, causing heaps of trouble.  This means that all data access needs to go through the application layer, which creates a lot of work to implement an object representation of all database entities, their variations and query requirements.  SQL cannot be used as a query language, and neither can ADO, OleDB or ODBC – making it impossible to use existing query and reporting tools such as Reporting Services or Excel.
  • In some cases, sharding may be slower.  Queries that need to aggregate or sort across multiple shards will not be able to take advantage of heavy lifting performed in the database.  You will land up re-inventing the wheel by developing your own query optimisers in the application layer.

In order to implement sharding successfully we need to deal with the following:

  1. The upfront selection of the best sharding strategy.  What entities do we want to shard?  What do we want to shard on? 
  2. The architecture and implementation of our application layer and data access layer.  Do we roll our own?  Do we use an existing framework?
  3. The ability to monitor performance and identify problems with the shards in order to change (and re-optimise) our initially chosen sharding strategy over time as the amount of data and usage patterns change over time.
  4. Consideration for other systems that may need to interface with our system, including large monolithic legacy systems and out-of-the-box reporting tools.

So some things to think about if you are considering sharding:

  • Sharding is no silver bullet and needs to be evaluated architecturally, just like any other major data storage and data access decision.
  • Sharding of the entire system may not be necessary.  Perhaps only the part of the web front-end that needs performance under high load has to be sharded, and the back-office transactional systems don't need to be sharded at all.  So you could build a system that has a small part sharded and migrates data to a more traditional model (or even a data warehouse) as needed.
  • Sharding for scalability is not the only approach for data – perhaps some use could be made of non-SQL storage.
  • The hand coding of all the application objects may be a lot of work and difficult to maintain.  Use can be made of a framework that assists or a code generation tool could be used.  However, it has to be feature complete and handle the issues raised in this post.
  • You will need to take a very careful approach to the requirements in a behavioural or domain driven style.  Creating a solution where every entity is sharded, every object is made of shards, and every possible query combination that could be thought up is implemented is going to be a lot of work and result in a brittle unmaintainable system.
  • You need to look at your database vendor's support for partitioning.  Maybe it will be good enough for your solution and you don't need to bother with sharding at all – see the sketch after this list.
  • Sharding, by splitting data across multiple physical databases, loses some (maybe a lot) of the essence of SQL – queries, data consistency, foreign keys, locking.  You will need to understand whether that loss is worthwhile – maybe you will land up with a data store that is too dumbed down to be useful.
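For comparison, this is roughly what vendor partitioning looks like in SQL Server – a sketch with made-up names, where, unlike sharding, the split is transparent to the application and its queries:

--Three partitions by region code, transparent to anything that queries the table
CREATE PARTITION FUNCTION pfRegion (INT)
AS RANGE RIGHT FOR VALUES (100, 200);

CREATE PARTITION SCHEME psRegion
AS PARTITION pfRegion ALL TO ([PRIMARY]); --one filegroup here; spread in practice

CREATE TABLE dbo.Customer
(
    CustomerId INT NOT NULL,
    RegionCode INT NOT NULL,
    LastName   NVARCHAR(100) NOT NULL
) ON psRegion(RegionCode);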

If you are looking at a Microsoft stack specifically, there are some interesting products and technologies that may affect your decisions.  These observations are purely my own and are not gleaned from NDA sourced information.

  • ADO.NET Data Services (Astoria) could be the interface at the application level in front of sharded objects.  It replaces the SQL language with a queryable RESTful language.
  • The Entity Framework is a big deal for Microsoft and will most likely, over time, be the method with which Microsoft delivers sharding solutions.  EF is destined to be supported by other Microsoft products, such as SQL Reporting Services, SharePoint and Office, meaning that sharded EF models will be able to be queried with standard tools.  Also, Astoria supports EF already, providing a mechanism for querying the data with a non SQL language.
  • Microsoft is a pretty big database player and has some smart people on the database team.  One would expect that they will put effort into the SQL core to better handle partitioning within the SQL model.  They already have Madison, which although more read-only and quite closely tuned for specific hardware configurations, offers a compelling parallelised database platform.
  • The Azure platform has more than just SQL Azure – it also has Azure storage which is a really good storage technology for distributed parallel solutions.  It can also be used in conjunction with SQL Azure within an Azure solution, allowing a hybrid approach where SQL Azure and Azure Storage play to their particular strengths.
  • The SQL Azure team has been promising some magic to come out of the Patterns & Practices team – we'll have to wait and see.
  • Ayende seems to want to add sharding to NHibernate.

Database sharding has typically been the domain of large websites that have reached the limits of their own, really big, datacentres and have the resources to shard their data.  The cloud, with small commodity servers such as those used by SQL Azure, has raised sharding as a solution for smaller websites, but they may not be able to pull off sharding because of a lack of resources and experience.  The frameworks aren't quite there and the tools don't exist (like an analysis tool for candidate shards based on existing data) – and without those tools it may be a daunting task.

I am disappointed that the SQL Azure team throws out the bone of sharding as the solution to their database size limitation without backing it up with some tools, realistic scenarios and practical advice.  Sharding a database requires more than just hand waving and PowerPoint presentations; it requires a solid engineering approach to the problem.  Perhaps they should talk more to the Azure services team to offer hybrid SQL Azure and Azure Storage architectural patterns that are compelling and architecturally valid.  I am particularly concerned when it is offered as a simple solution to small businesses that have to make a huge investment in a technology and an architecture that they are possibly unable to maintain.

Sharding will, however, gain traction and is a viable solution to scaling out databases, SQL Azure and others.  I will try and do my bit by communicating some of the issues and solutions – let me know in the comments if there is demand.

Simon Munro

@simonmunro
