I received my free Intel Galileo running Windows from the Windows Developer Program for IoT. Initial setup and the first sample were easy enough, although it is a bit weird running a telnet session to a Windows machine on a device that feels a lot like an Arduino!

I have an Arduino servo board that I wanted to try, but it seems that the servo.h libraries haven’t been ported yet. If anyone has existing servo code for Windows on Galileo, I’d like to see it. I abandoned the Windows on Robots idea for now and picked up a different shield.

Recently, I have been playing with a Cooking Hacks e-Health Sensor Platform V2.0. It is an Arduino shield that allows a bunch of health sensors to be plugged in.

 

I have the shield as well as the pulse oximeter and ECG sensor, which I am able to work with on the Arduino, and thought that I’d give them a try on the Windows Galileo board. To start, I downloaded the library for the e-Health Sensor Platform for Galileo – this is a library for a standard Galileo, not a Windows one, but it was a good place to start. I had a look at the source for the library and found that the pulse oximeter isn’t implemented (which is not surprising, as the implementation of the pulse oximeter is poor – it reads the device’s LEDs rather than returning integer values). The ECG API was simply a small calculation made on an analogue read. Even as a .NET developer who has managed to avoid C++, I was able to implement it quite easily. All that the e-Health sensor does is convert the ECG reading into a voltage from 0 to 5 volts to build the waveform.

The simple code looks like this:

void loop()
{
    // Read the raw ECG value from analogue pin 0 (0-1023 from the 10-bit ADC)
    float analog0;
    analog0 = analogRead(0);

    // Convert the reading to a voltage between 0 and 5 volts
    float ecg;
    ecg = (float)analog0 * 5 / 1023.0;

    Log(L"ECG: %lf\n", ecg);

    // Sample roughly every 10 milliseconds
    delay(10);
}

with debugger output…

[Image: Debugger Output]

Here’s what the bits look like…

[Photo: WP_20140803_10_22_26_Pro]

 

The next step is to send that data up to Service Bus, but that will take a bit longer.

Simon Munro

@simonmunro

ALM (Application Lifecycle Management) means different things to different people, and these views are largely influenced by tool vendors. IBM users may bias their view of ALM to things that the Rational toolset is good at — say requirements traceability and Java-oriented modelling. Microsoft users may see ALM as being about using TFS (Team Foundation Server) — with Visual Studio integrated sprints, tasks and testing tools. Ruby developers may see ALM as being about distributed source control and behaviour driven development — such as using Git and Cucumber. (I say ‘may’ because some of these toolsets and frameworks are very broad — broader than most of their users are aware.)

ALM is everything that is supported by those tools and frameworks — and more. Think, without referencing your favourite tools, about the lifecycle of an application. It starts off with an idea, hopefully gets developed and tested, is deployed to production, and is supported and maintained for a few years until it is finally retired. Over that period there are a lot of people, processes, deliverables, expenses, plans and other things that need to be organised, utilised, directed, controlled, disposed of — well, managed, really. In that context pretty much everything is ALM.

Businesses have, over the years, generalised processes, made them more efficient, and developed specialised tools and skills. The application lifecycle that requires those processes would make use of the existing parts of the business. The obvious ones would be things like financial planning and control, human resources, risk and compliance, and project management. It may be contentious, especially with the public cloud, but established businesses have IT processes too — from operations and support, to capacity planning, security, and (enterprise) architecture. This starts narrowing the scope of what is left to deal with in understanding the ALM processes that are needed, as illustrated below:

[Diagram: ALM-001]

In addition, certain technology choices limit how you can manage the application lifecycle. I hesitated making this point, as determining the technology can be part of ALM — but ultimately there will be things that are beyond control and processes that need to be included rather than adapted. If you are developing an app to be deployed on iOS, for example, you have little choice but to manage the deployment of (part of) the app according to Apple’s rules. There are also lower level constraints based on the environment and availability of development skills, at least for most projects. An application developed for Windows Azure in a .NET team is going to be coded in Visual Studio, C#, and use the .NET Azure SDK — there is not much that you can do about it apart from completely changing the technology choices, which is not always practical. These technology constraints on being able to define ALM processes are illustrated in the diagram below:

[Diagram: ALM-002]

When it comes to understanding the need for ALM on the cloud there are two different scenarios — one for established enterprises and one for startups. With enterprises, there may be a lot of processes and technologies that support application management, but they may be totally irrelevant in the cloud. For example, the existing capex-oriented financial modelling is useless when looking at a pay-per-use pricing model. Years of effort and experience on specific technologies, such as running Oracle databases in an on-premise datacentre, are less applicable for cloud applications. The diagram below depicts the reduced overlap between existing process and technology choices and cloud specific ALM processes:

[Diagram: ALM-003]

At the other end of the scale are new business ventures that have few existing business processes and little in the way of fixed technology choices. This means that there is a lot of work to do in terms of defining the cloud specific ALM processes. In a lot of software-oriented startups the distinction between business processes and software processes barely exists because everybody is defining, building, supporting and selling the software itself — the software is the business. If it is a cloud-based software startup, virtually everything is about cloud ALM (and it is fine not to call it that). This lack of existing processes is depicted in the diagram below, where the overlap of processes and tools is smaller simply because none exist yet:

[Diagram: ALM-004]

The reason for failure (or muted success) of cloud applications has been, and will continue to be for the next few years, a lack of skills in designing, building and operating cloud applications. When looking at the problem in more detail, it is not that people are unskilled in general, they just don’t know how to adapt their skills to a new environment. When we looked at this problem last year, we felt that developing cloud specific skills is not about telling people “This is how you develop cloud applications”, but rather “You know how to develop applications, and this is what you need to do differently on the cloud”. The basis for this approach is to assume existing application development skills, assume that the business already has some ALM processes (whether formal or not), and hook into those skills and processes.

The result was a book that I wrote and published –  “CALM – Cloud Application Lifecycle Management”, which looks at what is different in the cloud from the context of various models. Some of these models deal with upfront processes, such as defining the usage lifecycles (lifecycle model). Some deal with overall processes, such as the cost model. Most deal with fundamental design decisions, such as the availability and data models. There are also models that are important to longer-term success of the application, such as the health and operational models.

CALM is licensed as open source, which also means that it is free to download, read and use. It is available on github at github.com/projectcalm/Azure-EN, with pdf, mobi (Kindle), and raw html available for download on this share. A print version of the book is also available for purchase on Lulu.

CALM forces implementation teams to ask and answer some difficult questions that are important to successful delivery. I encourage you to have a look at CALM, let others know about it, ask any questions, and give me some feedback on how it can be made better.

Simon Munro

@simonmunro

As part of an availability model that I am working on, I got stuck right at the beginning when trying to find a definition that fits. So I went back to base principles to try and decompose what is meant by availability. This is a conceptual view, separate from the measurement of availability (the ‘nines’ malarkey). Have a look at it and give me some input so that I can refine it further.

Simon Munro

@simonmunro


Availability is a term that is so widely used in different contexts that it is very difficult to define in a way that satisfies all audiences. At its most basic, availability is the ability of a system to provide the expected functionality to its users. The expected functionality means that the application needs to be responsive (not frustrating users by taking too long to respond), as well as able to perform those functions reliably. But that is not enough to understand the full story about availability.

Availability is simplistically viewed as binary — the application is either available at a point in time, or it is not. This leads to a misunderstanding of availability targets (the ‘nines of availability’), the approaches to improving availability and the ability of salespeople to sell availability snake oil off the shelf (see 100% availability offered by Rackspace).
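
As a reminder of what those targets translate to, the arithmetic is simple – a year has roughly 526,000 minutes, so the permissible downtime falls off quickly with each extra nine. A quick sketch, nothing vendor specific:

#include <cstdio>

int main()
{
    const double minutesPerYear = 365.25 * 24 * 60;   // about 525,960 minutes
    const double targets[] = { 99.0, 99.9, 99.99, 99.999 };

    for (double t : targets)
    {
        // downtime allowed per year at this availability target
        double downtime = minutesPerYear * (1.0 - t / 100.0);
        std::printf("%.3f%% availability allows %.1f minutes of downtime per year\n",
                    t, downtime);
    }
    return 0;
}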

Application availability is influenced by something and has a visible outcome for the consumer, as discussed below.

Availability outcomes

The outcome, or end result, of availability is more than just ‘the site is down’. What does ‘down’ mean? Is it really ‘down’ or is that just the (possibly valid) opinion of a frustrated user (who is trying to capture an online claim after arriving late to work because they crashed their car)? The outcomes of availability are those behaviours that are perceived by the end users, as listed below.

Failure

The obvious visible indication of an unavailable application is one that indicates to the end user that something has failed and no amount of retrying on the user’s part makes it work. The phrase ‘is down’ is commonly used to describe this situation, which is more a statement about the user’s perception and understanding of the term ‘down’ than a reasonable indication of failure. The types of failure include:

  • Errors – where the application consistently gives errors. This is often seen on web applications where the chrome all works but the content shows an error, or garbage.
  • Timeouts – an application that takes too long to respond may be seen as being ‘down’ by the user, or even by the browser or service that is calling it.
  • Missing resources – a ‘404 – Not found’ response code can have devastating effects on applications beyond missing image placeholders; missing scripts or style sheets can ‘down’ an application.
  • Not addressable – a DNS lookup error, a ‘destination host unreachable’ error and other network errors can create the perception that an application is unavailable, regardless of its addressability from other points. This is particularly common for applications that don’t use HTTP ports and whose network traffic gets refused by firewalls.

Responsiveness

While it may be easy to determine that an application that is switched off is unavailable, what about one that performs badly? If, for example, a user executes a search and it takes a minute to respond, would the user consider the application to be available? Would the operators share the same view? Apdex (Application Performance Index) incorporates this concept and has an index that classifies application responsiveness into three categories, namely: Satisfied, Tolerating, and Frustrated. This can form a basis for developing a performance metric that can be understood, and also serves as a basis to acknowledge that in some cases we will experience degraded performance, but should not have too many frustrated users for long or critical periods.
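
For reference, the published Apdex calculation is straightforward: for a chosen target response time T, samples at or below T count as Satisfied, samples up to 4T count as Tolerating at half weight, and anything slower is Frustrated. A minimal sketch (illustrative only, not tied to any particular monitoring tool):

#include <vector>

// Apdex = (satisfied + tolerating / 2) / total samples, for a target threshold T.
double apdex(const std::vector<double>& responseTimesSeconds, double targetSeconds)
{
    if (responseTimesSeconds.empty()) return 1.0;     // no samples, no frustrated users
    double satisfied = 0, tolerating = 0;
    for (double t : responseTimesSeconds)
    {
        if (t <= targetSeconds)          satisfied += 1;   // Satisfied: at or under the target
        else if (t <= 4 * targetSeconds) tolerating += 1;  // Tolerating: up to four times the target
        // anything slower is Frustrated and contributes nothing to the score
    }
    return (satisfied + tolerating / 2) / responseTimesSeconds.size();
}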

Reliability

In addition to features being snappy and responsive, users also expect that features can be used when they are needed and perform the actions that they expect. If, for example, an update on a social media platform posts immediately (it is responsive), but is not available for friends to see within a reasonable time, it may be considered unreliable.

Availability influencers

While the availability outcomes receive the most attention, simply saying “Don’t let the application go down” fails to direct effort and energy to the parts of the application that ultimately influence availability. Some of these availability influencers are discussed below.

Quality

The most important, and often overlooked, influence on availability is the quality of the underlying components of the system. Beyond buggy (or not) code, there is the quality of the network (including the users’ own devices), the quality of the architecture, the quality of the testing, the quality of the development and operational processes, the quality of the data and many others. Applications that have a high level of quality, across all aspects of the system, will have higher availability — without availability being specifically addressed. An application hosted in a cheap data centre, on a jumble of cheap hardware, running a website off a single PHP script thrown together by a part-time student developer copying and pasting off forums will have low availability — guaranteed.

Fault tolerance

Considering that any system is going to have failures at some point, the degree to which an application can handle faults determines its availability. For example, an application that handles database faults by failing over to another data source and retrying will be more available than one that reports an error.
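
As a rough illustration of the idea (the names here are made up, not from any particular framework), a data access call that retries and then fails over to a secondary source might look something like this:

#include <functional>
#include <stdexcept>
#include <vector>

// Try each data source in turn (primary first, then standby), retrying a couple
// of times before moving on; only report an error once every source has failed.
template <typename T>
T queryWithFailover(const std::vector<std::function<T()>>& dataSources,
                    int retriesPerSource = 2)
{
    for (const auto& query : dataSources)
    {
        for (int attempt = 0; attempt < retriesPerSource; ++attempt)
        {
            try { return query(); }
            catch (const std::exception&) { /* transient fault: retry or move on */ }
        }
    }
    throw std::runtime_error("all data sources unavailable");
}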

Scalability

If a frustratingly slow and unresponsive application can be considered to be unavailable (not responsive or reliable), and this unresponsiveness is due to high load on the application, then the ability to scale is an important part of keeping an application available. For example, a web server that is under such high load that it takes 20 seconds to return a result (effectively unavailable) may be easily remedied by adding more web servers.

Maintainability

If a fault occurs and an application needs to be fixed, the time to recovery is an important part of availability. The maintainability of the application, primarily the code base, is a big part of the time that it takes to find, fix, test and redeploy a defect. For example, teams working on applications that have no unit tests and large chunks of code that have not been touched in years won’t be able to fix a problem quickly. This is because a large code base needs to be understood, impacts need to be assessed and regression tests performed — turning a single line code change into days of delay in getting an important fix deployed.

Serviceability

Modern web based applications don’t have the luxury of downtime windows for planned maintenance that exist in internal enterprise applications (where planned maintenance frequently occurs on weekends). The ability of an application to have updates and enhancements deployed while the application is live and under load is an important aspect of availability. A robust and high quality application will have low availability if the entire system needs to be brought down for a few hours in order to roll out updates.

Recoverability

Assuming that things break, the speed at which they can be fixed is a key influencer of availability. Recoverability is largely down to the operational team (including support/maintenance developers and testers) and their ability to get things going again. The ability to diagnose the root cause of a problem in a panic free environment, in order to take corrective action that is right the first time, is a sign of a high level of operational maturity and hence recoverability.

Detectability

If availability is measured in seconds of permissible downtime, only knowing that the application is unavailable because a user has complained takes valuable chunks out of the availability targets. There is the need not only for immediate detection of critical errors, but for the proactive monitoring of health in order to take corrective action before a potential problem takes down the application.

I am working with an SME customer at the moment that is big enough to have a high dependency on their website but not big enough to have an operational team available feeding and watering their systems all day, every day. This type of customer is not only common in the self-service cloud, but is the hottest, and probably biggest, target market.

So we’re building an AWS based system that has been architected, from the ground up, to be loosely coupled, failure resilient and scalable. We have multiple load balanced and auto-scaled web servers across multiple availability zones. We have MongoDB replica sets across multiple machines and a hot-standby RDS MySQL database. We have Chef, with all its culinary nomenclature of recipes and knife, telling all the servers what to do when they wake up. We have leant towards AWS services such as S3 and SQS because of their durability instead of trying to roll our own. We have engineered the system so that even if multiple failures occur, the solution will still serve requests until the next day when someone comes in and fixes things – much like a 747 can have multiple engine failures and still operate adequately without a need to repair or replace any of the engines while in flight.

In a nutshell, we have made all the right technical and architectural decisions to ensure that things will be as automated as possible and if something goes bump in the night, that there is no need to panic.

So we were asked: what would happen if something did go horribly wrong at 3am? Something not even related to OS/hardware/network failure. Something not preventable through planned maintenance, like disk space or suboptimal indexes. What about something that is the result of a bug in an edge case, or bad data coming in over a feed? What do you do when your automated, decoupled, resilient and generally awesome system falls over for some unknown reason?

You call the person who can fix it.

Calling an expert to look at a problem is something that happens every day (or night) in data centres around the world, whether internal enterprise or public hosting services. Someone is sitting on night shift watching a bunch of blinking lights. When a light flashes orange he sends an email or a text message. When it flashes red he picks up the phone and, according to the script and the directory in front of him, calls the person who is able to fix or diagnose the problem. Apart from the general rudeness you may get when phoning someone up at 3am, the operator making the call can do so with confidence because that person is on call and the script says that they should be notified. If they can’t get hold of someone because they can’t hear their mobile in the nightclub at 3am, the operator is not at a loss as to what to do – the script has a whole host of names of supervisors, operational managers and backup people that are, according to the script, both interruptible and keen to deal with the matter at 3am.

If you are running your system on AWS (or any similar self-service public cloud infrastructure) you don’t have a moist robot who has the scripts or abilities to call people when things go wrong. Sure, you can send a gazillion alert emails, but nobody reads their email at 3am. Even if they do see the email or text message they may think that the other people on the distribution list are going to respond – so they turn over and go back to sleep.

You would think that there is a business out there that will do this monitoring for you out of Bangalore or somewhere else where monitoring staff are cheap. We want cheap people to do the monitoring because, by virtue of running on AWS, we are trying to do things as cheaply as possible, so cheap is good. Granted, those people doing the monitoring won’t be able to restore a database and rerun transaction logs unsupervised at 3am, but we wouldn’t want them to, and neither would we want that from our traditional hosting provider (because we are cheap, remember). So if we contracted in someone for first line support we would (at least) get people who have a script, a list of contact people, a telephone and a friendly demeanour.

But what would those monitoring people offer that we can’t automate? Surely if they’re not doing much application diagnosis and repair then the tasks that they perform can be automated? What you get from moist robot monitoring, that you don’t get with automated alerts, is a case managed synchronous workflow. Synchronous because you pick up the phone and if no one answers you know to go to the next step; unlike emails where you don’t know if anyone has given it any attention. Workflow is the predefined series of steps to go through for each event. And case managed gives you the sense of ownership of a problem and the responsibility to do what you can (contacting people and escalating) in order to get it resolved.

But we’re engineers who like to automate things; surely even this can be automated?

Obviously there is an engineering solution to most things in life, and the Automated First Line Support system (AFLS) would look something like this…

Metrics

You need some things to measure in order to pick up if something has gone wrong. These could be the simple things that we are used to from something like Amazon CloudWatch, which can monitor infrastructure level problems – CPU load, memory, IO, etc. You can also monitor generic application metrics such as those monitored by New Relic – page response times, application errors, requests per minute, cache hits, IIS application pool memory use, database query times and, in New Relic’s case, important metrics such as the overall Apdex on the system. You will also need to build custom metrics that are coded into the system. Say, for example, the system imported data; you could measure the number of rows imported per second. You can measure the number of failed login attempts, abandoned baskets, comments added; anything really that is important to the running of the system.
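
As a rough sketch of what such a custom metric might look like in code (illustrative only, not tied to CloudWatch, New Relic or any other product), a counter plus a start time is enough to report rows imported per second:

#include <chrono>

// Count rows as they are imported and expose a rows-per-second rate that the
// monitoring system can sample periodically.
class RowsImportedMetric {
public:
    void recordRow() { ++rows_; }

    double rowsPerSecond() const {
        using namespace std::chrono;
        double elapsed = duration<double>(steady_clock::now() - start_).count();
        return elapsed > 0 ? rows_ / elapsed : 0.0;
    }

private:
    long rows_ = 0;
    std::chrono::steady_clock::time_point start_ = std::chrono::steady_clock::now();
};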

Triggers

Once you are collecting a whole lot of data you need to be able to do something with it. I’d be loath to talk about a “rules engine”, but you would need some DSL (Domain Specific Language) to figure out how to trigger things. It gets tricky when you consider temporality (time) and other automated tasks. Consider a trigger that looks something like this:

When Apdex drops below 0.7 and user load is above mean and an additional web server has already been added and the database response times are still good and we haven’t called anybody in the last ten minutes about some other trigger and this has continued for more than five minutes, then run workflow “ThingsAreFishy”
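
Expressed as code rather than a DSL, that trigger might look something like the sketch below. All of the names are hypothetical – the point is simply that a single rule combines several metrics, recent automated actions and elapsed time before deciding to wake anyone up.

// Hypothetical snapshot of the metrics that the AFLS has been collecting.
struct Metrics {
    double apdex;                    // overall Apdex score
    double userLoad;                 // current user load
    double meanUserLoad;             // historical mean user load
    bool   extraWebServerAdded;      // has auto-scaling already added a web server?
    bool   databaseResponsive;       // are database response times still good?
    int    minutesSinceLastCallout;  // minutes since anybody was last phoned
    int    minutesConditionHeld;     // how long this condition has persisted
};

// The trigger from the text: only escalate once the automated remedies have been
// tried and the problem has persisted for long enough.
bool thingsAreFishy(const Metrics& m)
{
    return m.apdex < 0.7
        && m.userLoad > m.meanUserLoad
        && m.extraWebServerAdded
        && m.databaseResponsive
        && m.minutesSinceLastCallout >= 10
        && m.minutesConditionHeld >= 5;
}

// if (thingsAreFishy(current)) runWorkflow("ThingsAreFishy");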

On Duty Schedule

Before any workflow runs you need to have a handle on who (as in real people) is on duty and can be called. This could be the primary support contact person, a backup, their supervisor, the operational manager and even, if all else fails, the business owner who can drive to someone’s house and get them out of bed. The schedule has to be maintained and accurate. You don’t want a call if you are off duty. A useful feature of the schedule would also be the ability to record who received callouts so that remuneration can be sorted out.

Voice Dialler and IVR System

You would need the AFLS to be able to make calls and talk to the person (possibly very slowly at 3am) and explain what the problem is. This is fairly easy to do and products and services exist that will translate your workflow steps and messages into voice. It would also be useful to have IVR acknowledgement prompts as well “Press 1 if you can deal with the problem… Press 2 if you want me to wake up Bob instead… Press 3 if you will look at it but are too drunk to accept responsibility for what happens”

“I’m Fixing” Mode

The AFLS will need to detect when it should not be raising triggers. If you are doing a 4am Sunday deployment (as the lowest usage period) and are bouncing boxes like ping-pong balls, the last thing you want is an automaton to phone up your boss’s boss and tell him that all hell is breaking loose.

Host Platform Integration

Some triggers and progress data will need to come from the provider of the platform. If there is a major event in a particular data centre, all hell may be breaking loose on your system as servers fail over to another data centre. Your AFLS will need to receive a “Don’t Panic. Yet.” message from the hosting provider’s system in order to adjust your triggers accordingly. There is no point in getting out of bed if the data centre’s router went down for three minutes and now, five minutes later, when you are barely awake, everything is fine.

Workflow

All of this needs to hang together in an easy to use workflow system and GUI that allows steps and rules to be defined for each of your triggers. The main function of the workflow is not diagnosis or recovery, but bringing together the on-duty schedule and the voice dialler to get hold of the right person. It would also be great if workflows could be shared, published and even sold in a “Flow Store” (sorry Apple, I got it first) so that a library of workflows can be built up and tweaked by people either much smarter than you are or more specialised with specific triggers that you are monitoring.

Cheap

The AFLS should cost a few (hundred maybe) dollars per server per month. None of that enterprise price list stuff will do.

Self Service

Like everything else that we are consuming on the public cloud, it needs to be easy to use and accessible via a web control panel.

Is anybody building one of these?

Obviously you don’t want to build something like this yourself; otherwise you land up with an Escheresque problem of not being able to monitor your monitor. Automated First Line Support (AFLS) needs to be built and operated by the public cloud providers such as Amazon, Google or Microsoft (do you see, @jeffbarr, that I put an ‘A’ in the front so that you can claim it for Amazon). Although they may want someone from their channel to do it, you still need access to the internal system APIs to know about datacentre events taking place.

Unfortunately the likes of AWS and Google don’t have full coverage of metrics either and need something like New Relic to get the job done. Either New Relic should expose a broader API or they should get bought by Amazon; I’m for the latter because I’m a fan of both.

Regardless of who builds this, it has to be done. I’ve just picked up this idea from the aether and have only been thinking about the problem for a day or two. No doubt somebody has given this a lot more attention than me and is getting further than a hand-wavey blog post. As the competitive market heats up, it is imperative that the mega public clouds like AWS, Azure and GAE, which traditionally don’t have monitoring services and aren’t used to dealing directly with end users, offer some sort of AFLS. If they don’t, the old school hosters that are getting cloudy, like Rackspace, are going to have a differentiator that makes them attractive. Maybe not to technical people, but to the SME business manager who has to worry about who gets up at 3am.

Update: @billlap pointed out that PagerDuty does SMS and voice alerts on monitoring – looks like it does a lot of the stuff that is needed.

Simon Munro

@simonmunro

Underpowered machines chosen by a faceless Excel user are the bane of developers in corporate environments. How can we get them to understand that they are simply not qualified to make decisions about the tools that we need?

My late grandfather was a carpenter by trade and I grew up hearing stories of how he hand planed wooden floors during the great depression, and we would drive twice over every bridge he built (himself, apparently) until he stopped in his eighties. He never lost his knack and love for carpentry and lived in one house for over fifty years with the most well equipped and well organized workshop that I have ever seen. He built stuff for the neighbourhood and transferred skills to my dad, who landed up being a civil engineer and builder, and my uncle, who is an orthopaedic surgeon (carpentry on people). I spent enough time with him to learn something about using large bench power tools and hand tools of the trade – I still sharpen tools before I use them, lay my plane on its side, and keep a few other ‘wax on, wax off’ habits.

While studying I used to work with my dad’s construction business during holidays and became quite proficient at roofing, hanging doors, built-in cupboards, steel erection and a few other professional building skills. Because most work was on site, we had an array of hand power tools that irked my traditional grandfather but were necessary to get the job done – and we were proud of our collection. In my heyday I could hang fifteen doors in a day and used no less than five power tools (drill, screwdriver, circular saw, planer, router) to get the job done.

Every tool had a use and purpose; while as a DIYer these days I make use of one drill, at the time I had use of about five or six different types depending on the need. Most tools were of a high quality; we used products from Hilti (which most DIYers know for fastenings, not big power tools) and the Bosch Blue range (which are better and more rugged than their green DIY counterparts) and with them felt a sense of belonging to a guild that had trade secrets.

The tools that I use these days are somewhat different; they are a lot less manly and generally don’t require that you wear safety goggles. Most of them don’t scream, kick and cut with the same sense of satisfaction and danger as a 2kW angle grinder; at least in a literal sense (I am seldom able to wield Linux without a fear of personal injury). The tools that I use today have their professional variants, with the accompanying sneers at the amateurs and their Excel macros. They are expensive, complex, resource hungry and require years of skill, practice and sharpening to produce the desired results.

The nice thing about most of the tools that I use is that, like Bosch Blue power tools, they are only known to the trade. People who know what I am talking about when referring to, say ORMs, know enough to help me and those that don’t know what I am talking about will sign off without asking enough questions to show their blissful ignorance. There is, as always, one exception – and that is the tool, the laptop, that I am using right now.

One of the tools that I use is a Mac Air, and I love my Mac Air – especially now that, with Office 2011, there is a corporate office suite that works. It is light, portable and powerful enough for the grind of web browsing, Word, Excel, email and other tasks that I need to perform as a run-of-the-mill office worker. By any measure of professional tools, the Mac Air is underpowered (with only 2GB of RAM), but I don’t even try to use it as a professional development tool. I realise, however, that it is comparable to the level of tooling required by the person who makes decisions about the laptop I should use as a professional developer – and that is a problem.

I am embarrassed to admit that as a professional .NET developer, my official standard assigned machine is a wussy 4GB laptop running 32bit XP. Like many of my colleagues I have gone rogue and installed 64bit Windows 7 and have gone off the radar (and domain) as far as getting internal support is concerned. That isn’t really a problem as all developers are able to support their own machines better than a telephone help desk, thank you very much.

So I am stuck using a tool that some Excel user has deemed sufficient and has, in the power tool analogy, picked up from the bargain bin at Homebase.

If portability is required, .NET developers need a machine with 8GB RAM, an Intel i5 or i7 and an SSD. Nothing less is acceptable. Personally I’m not a fan of big, heavy 19” laptops with fancy graphics cards and power supplies the size and weight of a brick, so I prefer to go lean 15” with basic graphics. If a developer is going to be sitting at the same desk for more than two months he or she needs, at a minimum, 12GB of RAM and dual monitors on a desktop machine – there is no point in lugging around a laptop when a desktop can be used, and these days good developers can maintain more than one environment easily and can get the latest version of source code to work from home if needed.

There is little argument over this by developers, only frustration at the standard spec that they are provided with. A developer can lose half a day a week in productivity by using an underpowered machine. To help the Excel decision makers, I’ve included a small spreadsheet that does the maths.
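
The spreadsheet itself isn’t reproduced here, but the sums are easy enough to sketch. Assuming an illustrative fully loaded cost of £400 per developer day (the rate is an assumption, not a quote):

#include <cstdio>

int main()
{
    double dayRate       = 400.0;            // assumed fully loaded cost per developer day
    double lostPerWeek   = 0.5 * dayRate;    // half a day a week lost to an underpowered machine
    double lostPerYear   = lostPerWeek * 46; // roughly 46 working weeks in a year
    double decentMachine = 2000.0;           // top end of the 1,500-2,000 range

    std::printf("Lost per year: %.0f; machine pays for itself in %.1f weeks\n",
                lostPerYear, decentMachine / lostPerWeek);
    return 0;
}

On those deliberately rough numbers, the better machine pays for itself in about ten weeks.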

…all this because developers should not spend £1500-£2000 on a decent machine. Apparently there are other costs associated with supporting non-standard equipment, tracking assets and a whole lot of accompanying excuses.

A good machine is more than just a luxury, occasional-use tool of the trade; it is the base tool, it is the white van. Can you imagine your white van man being without his white van and trying to do work carting all of his tools around on public transport?

Isn’t it time that people took our need for tools seriously?

Simon Munro

@simonmunro

The recent furore that erupted over Microsoft’s ‘shifting’ of focus for Silverlight and their somewhat wishy-washy attempts to clarify their position illustrates a fundamental disconnect between the Microsoft machine and the lives and careers of the developers that use their tools.

An enterprise CIO will cast his eyes over his domain and see a wide range of technologies in use, and although he will use his enterprise architecture minions to herd technologies in some direction, he knows that there will be a lot of varying technologies being used – the decisions about which technologies to choose are complex and in many cases dictated by the vendor of the product chosen by the business. Likewise, a vendor technology strategist will build products that use many different technologies in order to support the varieties in the market and their big customers. Standing outside the technical trenches it is obvious that there is no preferred technology and, besides, taking a bet on any particular technical bias is risky and best left to successors. It is difficult then for these people to realise or even understand the low level religious technical wars going on amongst the troops that are assembling one tiny piece of the overall solution.

But just because you can’t see it, it doesn’t mean that developers hedging their bets on one or another technology do not exist. The ‘religious wars’ that we are familiar with, such as Java vs C#, Ruby vs Python and Lisp vs Common Sense, are a ubiquitous testament to the dedication, passion and possibly blinding fanaticism that developers have for their particular chosen technology. The reason for this is quite simple: developing senior skills in any technology means that the average developer has to largely ignore others. Average to good developers (I am explicitly excluding ninjas) are unable to be considered masters at two wildly different programming languages, paradigms or frameworks. It is difficult enough as it is to become a master at a technology during a measly eight hour day, never mind going home and spending a single hour becoming a master at another.

While it may be sensible to have broad ‘generalist’ skills, and to a degree most good developers do, in the current employment market the best way to get a well paid job or contract is to have in-depth specialist skills in the technology being requested. A senior ASP.NET web developer used to leading teams and delivering complex solutions will find it difficult to be accepted for the same role, at the same rate, at a Java or Rails shop, regardless of how much time has been spent as a Java or Rails hobbyist. Ultimately most developers (especially expensive Western developers) have a market value that is only as high as the value of their detailed technical skills from their last gig – regardless of years of other skills and experiences. And while we may blame the developers for their lack of foresight, we perpetuate this problem ourselves during interviews by asking specialist questions that only a developer with current, hands-on and detailed technical knowledge will be able to answer.

So the mistake that Microsoft made during PDC, in seeming to support the ‘Silverlight is dead’ message by overplaying their plans for HTML5, was to ignore the developers that passionately believe that their medium term futures are dependent on Silverlight. Here in the City of London, it seems that Silverlight is gaining ground in financial services applications and many developers are whole hog, full time into Silverlight development (at very high contract rates I might add). Their value for their next gig will be determined by how much demand remains for the Silverlight skills that they have developed, and a message about the death of Silverlight means that they will potentially hit the street as a junior to mid level web developer next time around. Microsoft, it seems, made assumptions about the fanatical support of their .NET developers without realising that that support is waning and that their technology base is now so broad that there are technology battles underway across it.

I was interested to see Mike Taulty’s take (a well known UK evangelist) in his post ‘Silverlight *versus* HTML5? Really?’, where he neatly lays out the argument for there being a place for various technologies, but he makes the mistake of looking at the overall IT problem as a marketing person rather than as a specialist developer. I especially like his comment ‘On the question of investment in Silverlight – yes, I’ve made that investment too.’ – with all due respect to Mike, the credentials used to find his next job will not be his Silverlight coding skills, but his awesome evangelist abilities.

An attraction to developers of more open source frameworks is that their demand is more natural and organic – not subject to marketing budgets, product positioning and acquisitions by mega corporations. For front end web skills, developers are comfortable putting a whole lot of investment into jQuery because it is not subject to any vendor’s product focus and has a lifespan determined by developer support rather than how many developers are on the engineering teams at Redmond or how Microsoft intends to tackle the iPad problem.

In my twenty years in professional software development I have made a few major shifts in my preferred technology. It is difficult, frustrating and takes a while to get up to the same rate of delivery – but you get there in the end. So if a Silverlight developer has to make the huge investment of moving to HTML5, then why even bother going down the Microsoft route? Developers are finding that frameworks such as Rails and Django are fun to play with after hours, will get the same result, and don’t have the risk that Microsoft will, yet again, cut their value at the whim of this financial year’s marketing objectives.

Simon Munro

@simonmunro

One of the problems when working with spatial data is figuring out what to GROUP BY when aggregating data spatially. I’ll talk more about this in a future post and, for one of the approaches I tried, I needed the ability to draw an arbitrary geography polygon (a square in my case) that is yay long and wide.

Obviously with the geography type, you can’t just add n units to the x/y axes, as you need geographic co-ordinates (latitude and longitude). I needed a function where I could find out the co-ordinates of a point, say, 500 metres north of the current position.

I turned to Chris Veness’ excellent source of spatial scripts (http://www.movable-type.co.uk/scripts/latlong.html) and cribbed the rhumb line function, converting it into T-SQL.

The function takes a start point, a bearing (in degrees) and a distance in metres – returning a new point.

Please note that this function is hard coded for WGS84 (srid 4326) and may need tweaking to get the earth’s radius out of sys.spatial_reference_systems, or changing to suit your requirements.

Disclaimer: This code is provided as-is with no warranties and suits my purpose, but I can’t guarantee that it will work for you and the missile guidance system that you may be building.

Simon Munro

@simonmunro

CREATE FUNCTION [dbo].[RumbLine]
(
@start GEOGRAPHY,
@bearing FLOAT,
@distance FLOAT
)
RETURNS GEOGRAPHY
AS
BEGIN
--Rumb line function created by Simon Munro
--Original post at
--http://simonmunro.com/2010/10/13/rumb-line-function-for-sql-server
--Algorithm thanks to
--http://www.movable-type.co.uk/scripts/latlong.html
--Hard coded for WGS84 (srid 4326)
DECLARE @result GEOGRAPHY;
DECLARE @R FLOAT = 6378137.0; --WGS84 equatorial radius of the earth in metres
DECLARE @lat1 FLOAT, @lon1 FLOAT;
DECLARE @lat2 FLOAT, @lon2 FLOAT;

SET @distance = @distance/@R;  --convert metres to an angular distance in radians
SET @bearing = RADIANS(@bearing);
SET @lat1 = RADIANS(@start.Lat);
SET @lon1 = RADIANS(@start.Long);

SET @lat2 = ASIN(SIN(@lat1)*COS(@distance) + COS(@lat1)*SIN(@distance)*COS(@bearing));

SET @lon2 = @lon1 + ATN2(SIN(@bearing)*SIN(@distance)*COS(@lat1), COS(@distance)- SIN(@lat1)*SIN(@lat2));
SET @lon2 = CONVERT(DECIMAL(20,8), (@lon2+3*PI())) % CONVERT(DECIMAL(20,8),(2*PI())) - PI(); --normalise longitude to -PI..PI

DECLARE @resultText VARCHAR(MAX);
--Build the result as WKT and convert back to a geography point
SET @resultText = 'POINT('+CONVERT(VARCHAR(MAX),DEGREES(@lon2)) +' '+ CONVERT(VARCHAR(MAX), DEGREES(@lat2))+')';
SET @result = geography::STGeomFromText(@resultText, 4326);

RETURN @result;
END;

There is little doubt that organic farming is the right way to farm – how could it be otherwise? Organic farming does not rely on toxic inputs anywhere in the production chain and results in a product that is better, healthier and more natural for consumers. Even though it is practised on a smaller scale, organic farming is gaining market share among discerning consumers, for whom the invisible aspects of the end product are considered superior.

The software craftsmanship movement is a lot like organic farming – a well crafted, testable, separated solution using patterns, practices, tools and conventions understood by all software craftsmen is, without a doubt, the right way to build software.

Organic farmers are probably, in a way, better farmers than their inorganic counterparts. An appreciation for the entire ecosystem is required to fight off parasites and disease – wild plants are encouraged around fields to keep vermin away and livestock roam free over sparsely populated lands to fend off the diseases associated with being cooped up. Inorganic farmers, while having to understand more about the science of high intensity farming and all the products and equipment available on the market, have less of a need to understand the holistic natural environment.

Likewise, the self-proclaimed members of the software craftsmanship movement are generally (and on average) better developers than their cubicle bound corporate counterparts. These developers spend personal time learning new languages, techniques and patterns – continuously improving their skills and pushing their own craftsmanship. While a corporate developer may tinker with the latest tool demonstrated by the product vendor after sipping the (possibly toxic) Kool-aid, the software craftsman will pick it apart and debate it with his peers while pointing out that an already available open source framework is superior anyway.

Inorganic farmers would scoff at the thought of being considered less of a farmer than their organic counterparts – after all, there is nothing trivial about managing a highly mechanised farm covering thousands of hectares. They would rightly argue that there is more to inorganic farming than taking a soil sample to a lab and matching it up with barrels of chemicals. Inorganic farmers will also point out that organic farming, on its own, cannot cater to the needs of the market – billions of people need to be fed, and often this needs to be done in an environment that has substandard land fertility, erratic weather patterns and downright nasty bugs. Besides, organic farming is slow (land needs to lie fallow) and risky (disease can wipe out entire crops).

Corporate developers, while acknowledging and sometimes holding in high regard the software craftsmen, are fairly convinced that the pure approach advocated does not work in their environment. Corporate developers have legacy systems, tight deadlines, users who want nothing more than a spreadsheet and, unfortunately, an entire multinational corporation that is fairly dismissive of the IT cost centre – where individual ability is less important than vendor position in Gartner’s magic quadrant. Corporate developers have a special set of needs that have less to do with quality of the end product than budgets, quarters, politics, non-coding architects, committees, fascist data centre operators and a whole bunch of stuff that renders code quality, maintainability and craftsmanship quaint and somewhat pointless.

The biggest argument against organic farming is the cost. Luckily discerning consumers are being educated and becoming prepared to pay a premium for not being unwittingly poisoned, but the cost of producing a certain quality of product with available resources is always going to be higher than the inorganic counterpart.

Writing good software is complex and hard and therefore expensive. Although elite developers will be able to churn it out quickly, the effort and skill is too high for your average corporate development shop. While in the long run this may be toxic (costly) for their employers, the application needs to get out the door today and paid for in this quarter so a bit of junk food may be in order. After all, things can be detoxified later when the application is rewritten.

I don’t know this for certain, but I am sure that organic farmers hang out at organic farmers’ markets (or wherever they hang out) and lament the upgrade of their favourite seed spreader to double as a (shock, horror) pesticide spreader. “John Deere has gone evil” they will say, “What an epic fail! John Deere should encourage people to do organic farming”.

The developers who hang out in corners of the Internet flaming the latest offerings, betas and blog posts from large vendors are not ‘better’ developers than everyone else, nor are they ‘elite’. They are organic developers that are trying to make the software world a better, safer and less toxic place for us all. The organic developers also forget, occasionally, the tools, equipment, skills and arability of the corporate or casual development environment that is not interested, or ready, to fully embrace organic development.

As a consumer, if I tried to only buy organic food I would limit my choices and be hindered by cost so I am glad to have both and hope that the big commercial farmers are gradually learning some of the lessons learned in organic farming.

Simon Munro

@simonmunro

Disclaimer: I was brought up on small farms and can milk a cow and plough a field, but I don’t claim to know much about farming – organic or otherwise. Please do not take farming advice from a developer.

An investigation triggered by the lack of support of spatial data in SQL Azure has left me with the (unconfirmed) opinion that although requested by customers, the support of spatial data in SQL Azure may not be good enough to handle the requirements of a scalable solution that has mapping functionality as a primary feature.

Update: SQL Azure now has spatial support. The arguments made in this post are still valid and use of spatial features in SQL Azure should be carefully considered.

I have been asked to investigate the viability of developing a greenfield application in Azure as an option to the currently proposed traditional hosting architecture.  The application is a high load, public facing, map enabled application and the ability to do spatial queries is near the top of the list of absolute requirements.  The mapping of features from the traditionally hosted architecture is fine until reaching the point of SQL 2008’s spatial types and features, which are unsupported under SQL Azure – triggering further investigation.

It would seem that the main reason why spatial features are not supported in SQL Azure is because those features make use of functions which run within SQLCLR, which is also unsupported in SQL Azure.  The lack of support for SQLCLR is understandable to a degree due to how SQL Azure is set up – messing around with SQLCLR on multitenant databases could be a little tricky.

The one piece of good news is that some of the assemblies used by the spatial features in SQLCLR are available for .NET developers to use and are installed into the GAC on some distributions (R2 amongst them) and people have been able to successfully make use of spatial types using SQL originated/shared managed code libraries.  Johannes Kebeck, the Bing maps guru from MS in the UK, has blogged on making use of these assemblies and doing spatial oriented work in Azure.

So far, it seems like there may be a solution or workaround to the lack of spatial support in SQL Azure, as some of the code can be written in C#.  However, further investigation reveals that those assemblies are only the types and some mathematics surrounding them; the key part of the whole process, a spatial index, remains firmly locked away in SQL Server, and the inability to query spatial data takes a lot of the goodness out of the solution.

No worries, one would think – all that you need to do is get some view into the roadmap of SQL Azure support for SQL 2008 functionality and you can plan or figure it out accordingly.  After all, on the Microsoft initiated, supported and sanctioned SQL Azure UserVoice website, mygreatsqlazureidea.com, the feature ‘Support Spatial Data Types and SQLCLR’ comes in at a fairly high position five on the list, with the insightful comment ‘Spatial in the cloud is the killer app for SQL Azure. Especially with the proliferation of personal GPS systems.’ The SQL Azure team could hardly ignore that observation and support – putting it somewhere up there on their product backlog.

When native support for spatial data in SQL Azure is planned is another matter entirely, and those of us on the outside can only speculate.  You could ask Microsoft directly, indirectly, or even try and get your nearest MVP really drunk – when offered the choice between breaking their NDA and having compromising pictures put up on Facebook, they will choose the former.

Update: You can use your drunk MVP to try and glean other information instead, as it has been announced that SQL Azure will support spatial data in June 2010: http://blogs.msdn.com/sqlazure/archive/2010/03/19/9981936.aspx and http://blogs.msdn.com/edkatibah/archive/2010/03/21/spatial-data-support-coming-to-sql-azure.aspx (see comments below).  This is not a solution for all geo-aware cloud applications, so I encourage you to read on.

I have n-th hand, unsubstantiated news that the drastic improvements to spatial features in SQL 2008 R2 were made by taking some of the functionality out of SQLCLR functions and putting it directly into the SQL runtime, which means that even a slightly cut-down version of SQL Azure based on R2, which I think is inevitable, would likely have better support for spatial data.

Update:  In the comments below, Ed Katibah from Microsoft confirms that the spatial data support is provided by SQLCLR functionality and not part of the R2 runtime.

In assessing this project’s viability as an Azure solution, I needed to understand a little bit more about what was being sacrificed by not having SQL spatial support and am of the opinion that it is possibly a benefit.

Stepping back a bit, perhaps it is worthwhile trying to understand why SQL has support for spatial data in the first place.  After all, it only came in SQL 2008; mapping and other spatial applications have been around longer than that and, to be honest, I haven’t come across many solutions that use the functionality.  To me, SQL support of spatial data is BI bling – you can, relatively cheaply (by throwing a table of co-ordinates against postal codes and mapping your organization’s regions), have instant, cool looking pivot tables, graphs, charts and other things that are useful in business.  In other words, the addition of spatial support adds a lot of value to existing data whose transactions do not really have a spatial angle.  The spatial result is a side effect of (say) the postal code, which is captured for delivery reasons rather than explicit BI benefits.

The ability to pimp up your sales reports with maps, while a great feature that will sell a lot of licences, probably belongs as a feature of SQL Server (rather than the reporting tool), but I question the value of using SQL as the spatial engine for an application that has spatial functionality as a primary feature.  You only have to think about Google Maps, Street View and directions, with the sheer scale of the solution and the millions of lives it affects, and ask yourself whether or not behind all the magic there is some great big SQL database serving up the data.  Without knowing or Googling the answer, I would suggest with 100% confidence that the answer is clearly ‘No’.

So getting back to my Azure viability assessment, I found myself asking the question:

If SQL Azure had spatial support, would I use it in an application where the primary UI and feature set is map and spatially oriented?

But before answering that I asked,

Would I propose an architecture that used SQL spatial features as the primary spatial data capability for a traditionally hosted application where the primary UI and feature set is map and spatially oriented?

The short answer to both questions is a tentative no.  Allow me to provide the longer answer.

The first thing to notice about spatial data is that things that you are interested in the location of don’t really move around much.  The directions from Nelson’s Column to Westminster Abbey are not going to change much and neither are the points of interest along the way.  In business you have similar behaviour – customers’ delivery addresses don’t move around much and neither do your offices, staff and reporting regions.  The second thing about spatial data is the need to have indexes so that queries, such as the closest restaurants to a particular point, can be done against the data; spatial indexes solve this problem by providing tree-like indexing in order to group together co-located points.  These indexes are multidimensional in nature and a bit more complex than the flatter indexes that we are used to with tabular data.

Because of the slow pace at which coastlines, rivers, mountains and large buildings move around, the need to have dynamically updated spatial data, and hence indexes, is quite low.  So while algorithms exist to add data to spatial indexes, performing inserts is quite expensive, so in many cases indexes can be rebuilt from scratch whenever there is a bulk modification or insert of the underlying data.

So while SQL Server 2008 manages spatial indexes as with any other index, namely by updating the index when underlying data changes, I call into question the need for having such functionality for data that is going to seldom change.

If data has a low rate of change, spatial or not, it becomes a candidate for caching, and highly scalable websites have caching at the core of their solutions (or problems, depending on how much they have).  So if I were to scale out my solution, is it possible to cache the relatively static data and the spatial indexes in some other data store that is potentially distributed across many nodes of my network?  Unfortunately, unlike a simple structure like a table, the data within a spatial index (we are talking about the index here and not the underlying data) is closely tied to the process or library that created it.  So, in the case of SQL Server, the spatial index is simply not accessible from anywhere other than SQL Server itself.  This means that I am unable to cache or distribute the spatial indexes unless I replicate the data to another SQL instance and rebuild the index on that instance.

So while I respect the functionality that SQL Server offers with spatial indexing, I question the value of having to access indexed data in SQL Server just because it seems to be the most convenient place to access the required functionality (at least for a Microsoft-biased developer).  If my application is map oriented (as opposed to BI bling), how can I be sure that I won’t run into a brick wall with SQL Server, and with spatial indexes in particular?  SQL Server is traditionally known as the bottleneck in any solution, and putting my core functionality into that bottleneck, before I have even started and without much room to manoeuvre, is a bit concerning.

I should be able to spin up spatial indexes wherever I want to, and in a way that is optimal for the solution.  Perhaps I want an index that covers the entire area at a high level and generates lower-level ones as required.  Maybe I want to pre-populate indexes for popular areas, or for an area where an event is about to take place.  Maybe I am importing data points all the time and don’t want SQL churning indexes over data that I am not yet interested in as it is being imported.  Maybe I want to put indexes on my rich client so that the user has a lightning-fast experience as they scratch around in the tiny little part of the world that interests them.

In short, maybe I want a degree of architectural and development control over my spatial data that is not provided by SQL’s monolithic approach to data.

This led me to investigate other ways of dealing with spatial data generally, and spatial indexes more specifically.  Unsurprisingly, there are a lot of algorithms and libraries out there that seem to have their roots in a C and Unix world.  The area of spatial indexing is not new, and a number of algorithms have emerged as popular mechanisms for building spatial indexes.  The two most popular are the R-tree (think B-tree for spatial data) and the Quadtree (where a tree is built up by recursively dividing areas into quadrants).
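
A Quadtree, in particular, is simple enough to sketch in a few lines of Python: each node covers a rectangle, and once it holds more than a handful of points it splits into four child quadrants.  This is an illustrative toy (it ignores edge cases such as many duplicate points), not a production implementation:

class Quadtree:
    """Minimal point quadtree: a node splits into four quadrants when it fills up."""

    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)   # node covers [x0, x1) x [y0, y1)
        self.capacity = capacity
        self.points = []                 # [(x, y, payload)] while this is a leaf
        self.children = None             # four child Quadtrees once split

    def _contains(self, x, y):
        x0, y0, x1, y1 = self.bounds
        return x0 <= x < x1 and y0 <= y < y1

    def insert(self, x, y, payload=None):
        if not self._contains(x, y):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y, payload))
                return True
            self._split()
        return any(child.insert(x, y, payload) for child in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [Quadtree(x0, y0, mx, my, self.capacity),
                         Quadtree(mx, y0, x1, my, self.capacity),
                         Quadtree(x0, my, mx, y1, self.capacity),
                         Quadtree(mx, my, x1, y1, self.capacity)]
        for px, py, payload in self.points:   # push existing points down a level
            any(child.insert(px, py, payload) for child in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1, hits=None):
        """Collect the payloads of all points inside the query rectangle."""
        hits = [] if hits is None else hits
        x0, y0, x1, y1 = self.bounds
        if qx1 < x0 or qx0 >= x1 or qy1 < y0 or qy0 >= y1:
            return hits                       # query misses this node entirely
        for px, py, payload in self.points:
            if qx0 <= px <= qx1 and qy0 <= py <= qy1:
                hits.append(payload)
        if self.children is not None:
            for child in self.children:
                child.query(qx0, qy0, qx1, qy1, hits)
        return hits

# Usage: index the whole world, then query a small box around central London
world = Quadtree(-180.0, -90.0, 180.0, 90.0)
world.insert(-0.1282, 51.5125, "The Ivy")
world.insert(-0.1228, 51.5103, "Rules")
print(world.query(-0.2, 51.4, 0.0, 51.6))    # both points fall inside the box

An R-tree follows the same divide-and-prune principle but works with bounding rectangles of the data itself, which is why it handles lines and polygons as comfortably as points.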

There is a wealth of information on these fairly well understood algorithms, and even Microsoft’s own implementations do not fall far from them.  Bing Maps uses ‘QuadKeys’ to index tiles, seemingly referring to the underlying Quadtree index.  (SQL Server is a bit different, though: it uses a four-level, non-recursive grid indexing mechanism and uses tessellation to set the granularity of the grid.)
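
As I understand the published Bing Maps tile system, a quadkey is just the interleaved bits of a tile’s X and Y coordinates at a given level of detail, one base-4 digit per level, so the conversion fits in a few lines (this is my reading of the documented scheme, not Microsoft’s code):

def tile_to_quadkey(tile_x, tile_y, level):
    """Convert tile (x, y) at a given level of detail into its quadkey string."""
    digits = []
    for i in range(level, 0, -1):
        mask = 1 << (i - 1)
        digit = 0
        if tile_x & mask:
            digit += 1       # this level's bit of x contributes 1
        if tile_y & mask:
            digit += 2       # this level's bit of y contributes 2
        digits.append(str(digit))
    return "".join(digits)

print(tile_to_quadkey(3, 5, 3))   # "213"

The useful property is that a tile’s quadkey is a string prefix of the quadkey of every tile beneath it, so the quadtree parent-child relationship becomes plain string matching, which in turn makes the key an obvious candidate for partitioning or caching tiles.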

So if all of this spatial data stuff is old hat, surely there are some libraries available for implementing your own spatial indexes in managed code?  It seems that there are some well-used open source libraries and tools available.  Many commercial products, as well as SharpMap, an OSS GIS library, make use of NetTopologySuite, a direct port of the Java-based JTS.  These libraries have a lot of spatially oriented functions, most of which only make vague sense to me, including a read-only R-tree implementation.

Also, while scratching around, I got the sense that Python has emerged as the spatial/GIS language of choice (which makes sense, considering all those C academics started using Python).  It seems that there are a lot of Python libraries out there that are potentially useful in a .NET world via IronPython.
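
To give a flavour, Shapely, a Python wrapper over GEOS (which is itself the C++ port of the same JTS that NetTopologySuite was ported from), exposes an STR-packed R-tree that you build once over static geometries and then query.  Treat this as a rough, hedged sketch rather than a tested recipe: the exact return type of query() has changed between Shapely releases (geometries in older versions, indices in newer ones), and because Shapely depends on native GEOS bindings I would not assume it runs under IronPython without checking:

from shapely.geometry import Point
from shapely.strtree import STRtree

# Hypothetical static points of interest, in plain (lon, lat) coordinates
pois = [Point(-0.1282, 51.5125), Point(-0.1228, 51.5103), Point(-0.1020, 51.5205)]

# Built once over the whole set, then effectively read-only -- the JTS STRtree model
tree = STRtree(pois)

# Candidates whose bounding boxes intersect a small area around a query point
search_area = Point(-0.128, 51.511).buffer(0.01)
hits = tree.query(search_area)

print(hits)   # geometries or indices, depending on the Shapely version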

It is still early in my investigation, but I can’t shake the feeling that using SQL Server 2008 for spatial indexing, simply because that is the only hammer Microsoft provides, is not necessarily the best solution.  This is based on the following observations:

  • Handling of spatial data is not new – it is a mature part of computer science.  In fact, SQL Server was pretty slow to implement spatial support.
  • An RDBMS like SQL Server or Oracle may be a good place to store data, but not necessarily the best place to keep your indexes.  The SQL bias towards data consistency and availability is counter to the demands of spatial data and its indexes.
  • In order to develop a map-oriented solution, a fine degree of control over spatial data may be required to deliver the necessary functionality at scale.

While I am not against OSS, evaluating libraries can be risky and difficult, and I am stunned at the lack of support for spatial data in managed code coming out of Microsoft.  Microsoft needs to pay attention to the demand for spatial data support among developers (not just database report writers).  The advent of always-connected, geo-aware mobile devices, and their users’ familiarity with maps and satnav, will push the demand for applications that understand geographic data.  It is not hard to picture teenagers demanding a map on their mobile devices that shows the real-time location of their social network.

To support this impending demand, Microsoft needs to make spatial data a first-class citizen of the .NET framework (System.Spatial).  It wouldn’t take much: just get some engineers from SQL Server and Bing Maps to talk to each other for a few weeks.  Microsoft, if you need some help with that, let me know.

In the meantime I will walk down the road of open source spatial libraries and let you know where that road leads.

Simon

@simonmunro

On 1 February 2010, when Microsoft Azure officially goes into production, the CTP version will come to an end.  In an instant, thousands of Azure apps in some of the remotest corners of the Internet, built with individual enthusiasm and energy, will wink out of existence – like the dying stars of a discarded alternative universe.

Sadly, the only people who will notice are the individual developers who took to Azure, figured out the samples and put something, anything, out there on The Cloud, beaming like proud fathers remembering their first Hello World console app.  For the first time we were able to point to a badly designed web page that was, both technically and philosophically, In The Cloud.  Even though the people we showed it to barely gave it a second look (it is, after all, unremarkable on the surface), we left it up and running for all the world to see.

Now, Microsoft, returning to its core principle of being aggressively commercial, is taking away the Azure privilege and leaving the once-enthusiastic developers feeling like petulant children the week after Easter, when the relaxing of the chocolate rations has come to an end.  Developers are now being asked to hand over their credit cards to make use of Azure – even the free version.  I don’t know about anyone else’s experience, but in mine, ‘free’ followed by ‘credit card details please’ smells like a honey trap.

So it’s not enough that we have to scramble up the learning curve of Azure, install the tools and figure things out, all on our own time; we now also have to hand over our credit card details to a large multinational that has a business model that keeps consumers at arm’s length, is intent on making money, and may bill you for an indeterminate amount of computing resources consumed – all of which you are personally liable for.

Gulp! No thanks, I’ll keep my credit card to myself if you don’t mind.

The nature of Azure development, up until now and until adoption becomes mainstream, is that most of it has no commercial benefit for the developers.  While some companies are working on Azure ‘stuff’, there is very little in the way of Azure apps out there in the wild, and even fewer customers who are prepared to pay for Azure development… yet.  A lot of the Azure ‘development’ that I am aware of has been done by individuals, in their own time, on side projects, as they play with Azure to get on the cloud wave, enhance their understanding or simply try something different.

While I understand Microsoft’s commercial aspirations, the financial commitments expected of Azure ‘hobbyists’ run the risk of choking the biggest source of interest, enthusiasm and publicity – the after-hours developer.  Perhaps the people in the Azure silo who are commenting ‘Good riddance to the CTP developers, they were using up all of these VMs and getting no traffic’ have not seen the Steve Ballmer ‘Developers! Developers! Developers!’ monkey dance that (embarrassingly) acknowledges the value and influence of developers who are committed to a single platform (Windows).

It comes as no surprise that the number one feature voted for in the Microsoft-initiated ‘Windows Azure Feature Voting Forum’ is ‘Make it less expensive to run my very small service on Windows Azure’, followed by ‘Continue Azure offering free for Developers’ – the third spot has less than a quarter as many votes.  But it seems that nobody is listening – instead they are rubbing their hands in glee, waiting for the launch and expecting the CTP goodwill to turn into credit card details.

Of course there is a limp-dicked ‘free’ account that will suggestively start rubbing up against your already-captured credit card details after 25 hours of use (maybe).  There is also a half-cocked, free-ish offering for MSDN subscribers – for those fortunate enough to get their employers to hand over the keys (maybe).  So there are roundabout ways for a developer to get up and running on the Azure platform, but it may just be too much hassle and risk to bother.

Personally, I didn’t expect it to happen this way, secretly hoping that @smarx or someone on our side would storm the corporate fortress and save us from their short sightedness and greed.  But alas, the regime persists – material has been produced, sales people are trained and the Microsoft Azure army is in motion.  There won’t even be a big battle.  Our insignificant little apps will simply walk up, disarmed, to their masters with their heads hung in shame and as punishment for not being the next killer app, they will be terminated – without so much as a display of severed heads in the town square.

Farewell Tweetpoll, RESTful Northwind, Catfax and others.

We weren’t given a chance to know you.  You are unworthy.

Simon

@simonmunro

More posts from me

I do most of my short-format blogging on CloudComments.net, so head over there for more current blog posts on cloud computing.

RSS Posts on CloudComments.net

  • Free eBook on Designing Cloud Applications
    Too often we see cloud projects fail, not because of the platforms or lack of enthusiasm, but from a general lack of skills on cloud computing principles and architectures. At the beginning of last year I looked at how to address this problem and realised that some guidance was needed on what is different with […]
  • AWS and high performance commodity
    One of the primary influencers on cloud application architectures is the lack of high performance infrastructure — particularly infrastructure that satisfies the I/O demands of databases. Databases running on public cloud infrastructure have never had access to the custom-built high I/O infrastructure of their on-premise counterparts. This had led to the wel […]
  • Fingers should be pointed at AWS
    The recent outage suffered at Amazon Web Services due to the failure of something-or-other caused by storms in Virginia has created yet another round of discussions about availability in the public cloud. Update: The report from AWS on the cause and ramifications of the outage is here. While there has been some of the usual […]
  • Microsoft can do it without partners
    Microsoft’s biggest strength has always its partner network and it seemed, at least for a couple of decades, that a strong channel was needed to get your product into the market. Few remember the days where buyers only saw products in computer magazines, computer trade shows and the salespeople walking through the door — the […]
  • The significance of Linux VMs on Windows Azure
    One of the most significant, highly anticipated, and worst kept secrets of the Windows Azure spring release is the inclusion of persistent VMs, with the notable addition of support for Linux on those VMs. The significance of the feature is not that high architecturally — after all, Windows Azure applications that were specifically architected for […]

@simonmunro
