Given a good enough specification, software can be developed anywhere in the world, which makes it attractive to do the work wherever people are cheapest. So where does this leave first-world developers with expensive lifestyles?

The cost of developing software is largely the cost of people.  Development workstations, tool licences, source control and bandwidth are negligible compared to the burn cost of the actual developer, so it stands to reason that cheaper salaries should make for cheaper software.  The cost of people is influenced by a number of factors, such as seniority, but what is interesting is the income demanded by skilled people who have expensive lifestyles.  It is this phenomenon that has created the primary driver behind the outsourcing industry, where people are clearly cheaper to employ in Bangalore than in London.

If we believe that people with the same skills cost a fraction of the price in another country because of their lifestyles, the obvious question to ask is why all software development isn’t outsourced, leaving virtually no developers in expensive parts of the world.  Making use of outsourced skills is more complicated than simply mapping developer roles to commodity resources regardless of location, and the quality of the final deliverable is affected by more than just the individuals’ technical abilities.

However, the use of outsourcing models is increasing and will continue to increase.  It is the desire of ‘elite’ developers to work better and more efficiently that drives the practices and culture that allow developers to work remotely – which could be anywhere.  Test-driven development, continuous integration, collaboration technologies, cloud deployments, Git-style source control, Google Wave and other technologies are becoming mainstream, used not by a select few but by the masses of developers spread all over the globe.

So why is there still on-site and insourced development work available in London or New York?  How should developers align their careers in order to be safe from the axe when some large chunk of business gets moved to a cheaper location?  Below are some styles of development that still seem to retain the services of expensive people.

1. Shortest Route

A lot of one- or two-man consulting gigs that exist for a few months cannot make use of remote resources because the business wants the work done simply, fast and now. The effort and time involved in finding remote people, writing a specification, getting all the infrastructure in place and coping with the communication latency is simply too much overhead to bother with.  It is easier to have someone who can come on site, have a meeting, sit down and get coding.  This sort of work tends to be short-term, fiddly and run with little disciplined project management.

2. Big Branded Consulting

Big consulting companies have the scale to offer services that include any combination of people and skills at fairly high rates, and are able to pull it off because of their brand and penetration within the customer.  Big consulting companies have some really good people (often not even technical) who are able to open up opportunities and allow commodity developers to step in and do the work – keeping the customer happy.  (Of course they can also switch to outsourcing models when it suits the deal.)

3. Product Centric

Being a specialist doing ‘configuration’ development on some specific product allows developers to be seen as part of the overall expense of implementing a particular package, rather than as a per-hour development resource.  It doesn’t even matter if the work is technically difficult, as long as it requires an intimate understanding of a product for which there are few generally available skills.

4. Integration

Getting data from one system into another, no matter how simple the technology, is no trivial task.  There are undocumented ‘business rules’ that need to be catered for, confusing mappings and changes to processes that have to be uncovered on site and in person in order to coax an interface into existence.

5. Responsive

Some businesses need to be close to their development teams, be that for real business reasons (such as financial traders) or because the business model is trying to change the world – in some cases it is good to have a trusted development team with a track record.  These teams are generally responsive to the needs of the business and are able to churn out systems and features that delight the users at a predictable and valuable pace.

6. Startup

Startup culture is about more than just getting software developed for the lowest possible price.  Startups need to be agile and responsive, and all staff at a startup have to have a special relationship that goes beyond handing out development tasks.  Startups, by the nature of how they operate and market themselves, seem to need to be located physically close to where the buzz is happening in order to get attention.

7. Bleeding Edge

Some individuals or groups of developers are able to market themselves as leaders in a particular technology, stack or approach and find that work will always come to them.  This group doesn’t have it easy though – what is bleeding edge now is mainstream tomorrow due to their own efforts and they have to continually reinvent themselves and discover new paradigms to shift.  Although a lot of developers aspire to be in this category there is little room and it is highly competitive.

 

So if you are a developer sitting in a first-world country wanting to justify the rate that matches your lifestyle ambitions, you need to be constantly aware of the value that you are adding for the people who are paying you to write code.  If you are looking at making lots of money as a generic, commodity developer then you face stiff competition from literally millions of people who can do the same thing.

Make sure that you find a style that is valuable and has a future as you chart your career.

Simon Munro

@simonmunro

In part 1 of this series I discussed the base technologies (virtualisation, shared resources, automation and abstracted services) that underpin cloud computing.  Part 2 discussed the new computing models (Public Cloud, Utility Pricing, Commodity Nodes and Service Specializations) that have emerged as a result of the base technologies.  Part 3 listed some of the business value that can be extracted from these new models.

This part explores some of the emerging business models, and hence target markets, that may be able to make use of the business value on offer.

Part 4 : Emerging Business Models

 

Rogue Enterprise Departments

The most boring and barely mentioned group is the rogue enterprise departments that are fed up with the inability of internal IT to meet their needs. Cloud computing allows them to build a solution quickly, under the radar and with low financial risk, simply by putting in their own effort and whipping out their credit cards. But how does this facilitate the emergence of new business models? It allows enterprises, by being entrepreneurial at the departmental level, to collectively become more competitive and innovative and to respond to market needs. Products can be developed more quickly and cheaply, and can be allowed to fail if they don’t work.

Do you want to quickly spin up a sales campaign app to pitch a new offering? There’s an app for that. Do you want to offer post sales extended warranties via a coupon in the packaging that can be redeemed online? Maybe there is an app for that too.

I think that the market for rogue enterprise cloud applications is larger than people think, and that the concerns and barriers raised by corporate risk, security and governance will be forced to adjust.

Small and Medium Business

Some cloud vendors, particularly Microsoft, believe that the largest market is the small to medium-sized businesses that should be using cloud computing rather than traditional hosting. The immediate and more obvious benefit is that smaller businesses can operate solutions that, for a low cost, have enterprise-scale features such as high availability, responsiveness and reliability. It allows smaller businesses to compete head-on with their larger competition by having high-quality customer-facing solutions, or better systems for staff in the field, logistics, billing or other business processes.

What will be interesting over the next few years (probably more than five years) is how these smaller businesses start linking up to each other in value chains and providing more business services via the cloud.

Start-ups

The cloud start-up dream is to become the next YouTube or Twitter, and cloud computing plays to the ambitious (and sometimes unrealistic) plans of start-ups. A start-up can use its limited funding on development and marketing without wasting it on hardware that it would only need if Oprah mentioned it, which will probably never happen. Using cloud computing, a start-up can still operate from the founders’ garage as their role models did ten to twenty years ago, yet run a huge international web property. While most start-ups will never achieve their lofty dreams, cloud computing is there to support them if they do make it. Although it is unlikely to be 100% correct the first time, a properly architected, cloud-oriented solution could scale sufficiently to handle growth and avoid the infamous ‘fail whale’.

Emerging Markets

Finally, cloud computing is destined to provide the architectural basis for new products offered by first-world organizations to emerging markets. If there is an economic shift towards countries such as India, China and Brazil, the delivery of products by organizations based in New York and London will need to be radically different, low cost and innovative. It is likely that many products will be delivered via the Internet, but emerging markets do not have first-world infrastructure, so delivery will have to work with mobiles, simple interfaces, low bandwidth and high latency. Also, due to the high dependency on a mobile device and the low margins on each sale, the (possibly free) ecosystem needs to be social, viral and low cost in delivery and marketing terms. There are many smart people around the world thinking about these products, not from a cloud computing perspective (yet), but from their own desire to open up and penetrate new markets. A typical product might be simple life insurance delivered via a mobile phone on a pay-as-you-go basis: a $2 premium-rate text gives you $500 of funeral cover.

Emerging markets can also take advantage of sophisticated first world individuals or social groups. Imagine a system that provides, again via a mobile device, microfinance (say $20 loans) funded by individuals in $1 increments across the United States. ‘Want to lend $10 and get $12 back? There’s an app for that.’

Relating back to the cloud computing model, there are literally billions of people who could be served by large multinationals if the product and the price are right. These products cannot use traditional delivery channels (mail, branches or call centres) as the margins are pennies. The only way to deliver them is using sophisticated, reliable and low-cost IT – and that is where cloud computing plays a role.

Change and Interest

What we understand the cloud computing market to be today is different from what the reality will be 5-10 years from now – at the very least because there is confusion and conflicting messages. Hype cannot be sustained within a vacuum and there definitely is interest in cloud computing fuelling the hype, which means that there probably is a demand. Beyond the marketing material and shallow articles in the mainstream media, leaders in business are sitting down and conversing with people who know something about cloud computing and finding compelling arguments that apply to their particular business and situation.

Businesses are reeling from the financial crisis – manufacturing, shipping, travel, services, media and just about every other sector is looking at how to do things differently, look at new markets, manage costs, take fewer risks, be more responsible, and many other items on the boardroom agenda that would never have been tabled a few years ago. Individuals are feeling the threat of collapsing industries, unemployment, financial insecurity and diminishing prospects. They too are feeling the need to do things differently and have a yearning for change. It is causing them to be more entrepreneurial, to create new businesses, to try to change enterprises from within and to elect a President who offers hope and change.

So while Information Technology has evolved at its usual (rapid) pace, change has swept across the world and something within cloud computing has resonated with that change and amplified the impact that cloud computing could have on the way we sell, buy, develop and interact with each other. Where cloud computing may have been an interesting technology sideshow in years gone by, the promise that it offers (which admittedly it may not be able to deliver on) has caught the attention of business leaders.

So people are listening, leaning forward in their chairs and conjuring up scenarios where cloud computing may work for them. They are talking, arguing, writing and conversing about a set of technologies that will fundamentally rock our approach to IT.

The question is, are you part of that conversation?

Simon Munro

@simonmunro

In part 1 of this series I discussed the base technologies (virtualisation, shared resources, automation and abstracted services) that underpin cloud computing.  Part 2 discussed the new computing models (Public Cloud, Utility Pricing, Commodity Nodes and Service Specializations) that have emerged as a result of the base technologies.

This part tries to understand the business value that can be extracted from these new models.  After all, without value that can be easily understood by the business, there is little point in deploying cloud computing technology.

Part 3 : Business Value

 

Fail Cheap and Fail Fast

Thanks to a combination of factors, the ability to try out an idea that can ‘Fail Cheap and Fail Fast’ facilitates business cases where the IT component does not become a burden if the endeavour is unsuccessful. In the cloud, if a business does not succeed there are no expensive paid-for servers sitting idle and no hosting contracts that are paid for but unused, like gym memberships. In the cloud, the initial financial commitment is lower and the monthly burn rate controllable. If it does not work, you simply cancel the agreement and stop paying.

Handling Growth

The ability of cloud computing solutions to handle growth allows time, effort and money to be spent on things that are more important during the initial stages, rather than on hardware and licences that are going to sit around doing nothing for a while. It is common for the purse-holders, when receiving a request for budget, to ask “How does this help revenue this quarter?”, and planned, prudent and reasonable infrastructure purchases simply do not generate revenue until sales pick up. Having a platform in place that can provide additional resources on demand negates the need for up-front purchases. An important observation, however, is that this only makes sense if growth is expected. A website that is expected to stay small or self-constrained (such as a corporate timekeeping application with a finite number of users) may be better suited to a Plain Ol’ Web (POW) app, forgoing the cloud computing engineering costs.

Cyclic Demand

The reason Amazon is a cloud provider is that it needed a lot of hardware to handle sales during the Christmas season, hardware which sat idle for the rest of the year, and this spare capacity started to be sold off as the cloud. Many businesses have similar situations where there is peak or cyclic demand (per day, per season and so on), such as the Christmas rush, or unpredictable demand, where the site is suddenly mentioned by Oprah. Peak demand periods are important for businesses: it is often the time when first-time customers, who have cost a lot of marketing money to attract, visit the site and expect a positive experience. Cloud computing caters specifically and overtly to the handling of peak demand periods.

Managing Risk

Because of features that are part of cloud computing solutions, a lot of risk can be taken care of out of the box, so in many respects cloud computing can be seen as part of the solution to managing risk – operational, reputational, disaster-related and so on. Although cloud computing security could increase risk, many other fundamental requirements and features of cloud computing platforms, such as backups, availability, patching, load balancing and scalability (delivered in an automated, zero-touch manner), do tick some risk management boxes.

Time to Market

In a competitive market, a product’s development cycle and time to market are key to its viability and planning. Having to pad the launch by a few months because of the provisioning of IT could scupper the entire product proposal. While IT has a tendency and history of not delivering on time, cloud computing can, in some cases, reduce the time to deliver, particularly if the alternative involves a long hardware, software and networking procurement process.

Operational Expenditure

Because cloud computing is about the consumption of units of computing that are billed monthly (or daily, or over some other period), the idea that computing costs become operational expenditure rather than capital expenditure is often touted as a benefit of cloud computing. While true and relevant in some cases, this cannot be applied generally: different businesses have different (and complex) financial models that may or may not find the capex of IT hardware a decisive issue.

Enterprise IT Backlog

Lurking within all businesses is dissatisfaction with the rate of delivery of centralised IT, which seldom has the skills and resource bandwidth to cope with the torrent of new business requirements and applications. Rather than having their particular needs sit for months or years in the enterprise IT backlog, disgruntled and impatient business units are taking their budgets to external organizations for fulfilment. The tradition of getting an external development company to build a bespoke solution and then forcing enterprise IT to install and support it will be replaced by development, support and operations being completely off site, leaving internal IT in the dark and toothless. Salesforce.com has ridden this demand, and many cloud providers will cater to these rogue bespoke solutions.

Domain Specific Clouds

Hollywood studios need to do a lot of CG rendering towards the end of movie production, when time is running out. Having the necessary horsepower sitting around for when it is needed is expensive and quickly becomes redundant, so studios hand rendering over to third parties that have huge capacity to take on particular jobs. While not cloud computing per se (I am sure they ship data on rather large hard disks rather than use the internet), the idea of specialised processing services that offer more than just computing power is beginning to be embraced in the cloud computing landscape, and the term ‘Domain Specific Cloud’ is being tossed around. A more common example is data mining, which covers a whole lot of services including forensics, fraud detection, deduplication and other value-added services that are a lot more than just raw computing.

So what are these new ways of doing business that are emerging as a result of the value that can be realised from cloud computing?

Continue to part 4 : ‘Emerging Business Models’

Simon Munro

@simonmunro

In part 1 of this series I discussed the base technologies (virtualisation, shared resources, automation and abstracted services) that underpin cloud computing.  This part deals with how those base technologies have allowed us to envision and adopt new computing models that are central to the cloud computing movement.

Part 2 : Computing Models

 

Public Cloud

From the perspective of the consumer, as long as the requirements are satisfied, any external supplier can provide the demanded computing; the cost and effort of building on-premise, on-demand computing facilities may be overkill for many businesses. As a result, large providers of computing resources are stepping in to provide cloud computing to anybody who wants it and is willing to pay. This does not disqualify the value proposition of the private cloud, but it is the public cloud providers, such as Amazon, that have been pushing the change in computing models.

Utility Pricing

If consumers require computing resources on demand, it is logical to expect that they only want to pay for those resources when they need them and while they are in use. The pricing of cloud computing is still in its infancy and sometimes quite complicated, but the idea is that consumers pay as they would for any utility, like electricity, rather than paying for a whole lot of physical assets that they may or may not use. This has the potential to radically change how businesses serve customers and process data, as planning and decisions are based not on upfront costs but on dynamic usage cycles and different types and rates of billing.
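
To make the pay-for-what-you-use idea concrete, here is a minimal sketch of a utility-style bill. The rates are entirely hypothetical and not taken from any vendor’s price list; the point is simply that the bill tracks what was consumed rather than what was bought up front.

```python
# Utility-style billing sketch with hypothetical rates -- the bill follows
# actual consumption, not up-front asset purchases.

HYPOTHETICAL_RATES = {
    "compute_hour": 0.12,      # per instance-hour
    "storage_gb_month": 0.15,  # per GB stored per month
    "bandwidth_gb": 0.10,      # per GB transferred
}

def monthly_bill(instance_hours, storage_gb, bandwidth_gb):
    """Return the charge for one month's actual consumption."""
    return (instance_hours * HYPOTHETICAL_RATES["compute_hour"]
            + storage_gb * HYPOTHETICAL_RATES["storage_gb_month"]
            + bandwidth_gb * HYPOTHETICAL_RATES["bandwidth_gb"])

# A quiet month and a busy month produce very different bills, and neither
# leaves an idle capital asset behind.
print(monthly_bill(instance_hours=720, storage_gb=10, bandwidth_gb=5))     # one small instance
print(monthly_bill(instance_hours=7200, storage_gb=50, bandwidth_gb=500))  # a traffic spike
```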

Commodity Nodes

Providers of these on-demand resources would, for technical and practical reasons, rather not provide highly specialised resources. It is very difficult to provide an expensive and depreciating high-end server with loads of memory and fast IO, or to provide a machine with a sophisticated graphics processor. Without the provision of specialised components, regardless of the underlying infrastructure (which may or may not be assembled out of high-end components), the resources provided are straightforward and anaemic. This changes application architectures, because dedicated and powerful single-node servers are not available and architects cannot make assumptions about the availability and reliability of individual nodes.

Service Specializations

There is a difference between a consumer that requires an email service and one that requires a database service, so providers of computing resources need to cater to different markets. Because of the underlying approach and technology, providers generally offer one particular service abstraction, and the different cloud specializations – IaaS, SaaS, PaaS and others – have emerged and are used to identify the class of cloud computing offering.

If we consider that cloud computing is simply a logical progression of IT technologies, what is it that grabbed the attention of the market and caused vendors to invest so much money in new products and huge datacentres? The reason is that cloud computing opens up new ways of conducting and operating a business and using technology to tackle new markets.

Before looking at the types of businesses that are intrigued by cloud computing, we need to understand the value that businesses see in the cloud. While technologists may find it surprising, not everybody wants to play with cloud computing just because it is shiny and new. Businesses want value in the form of cost savings, reduced risk, increased turnover and so on before they will move systems and infrastructure onto the cloud.

Continue to part 3 : ‘Business Value’

Simon Munro

@simonmunro

The cloud is hype.

It is the hype around a logical step in the progression of IT and somehow the term ‘The Cloud’ has stuck in the minds of vendors, the media and, to a lesser extent, the customer.

Unlike most terms that IT is used to, ‘The Cloud’ is not specific – a customer is never going to want to ‘buy a cloud’ and nobody can, with any authority, say what the cloud is. Disagreement exists on the definition of the cloud and cloud computing – academics, vendors, analysts and customers all disagree to varying degrees. This creates confusion as well as opportunities – every blogger, journalist, vendor, developer and website can slap a cloud sticker on their product, service, website, marketing material and even the forehead of their marketing VP, and deem it to be ‘The Cloud’ or ‘<some new form> Cloud’.

In a world of no definitions, any definition is valid.

So while I am loath to add yet another definition to the world of cloud computing, it seems that any conversation about cloud computing needs to start with some common understanding of the base concepts and principles.  I tackled the question: “If cloud computing is based on existing technologies, why has it suddenly become important and talked about only recently?”

I believe that the answer is that the base technologies have matured, leading to new computing models; business is able to realise value from those models, and that in turn leads to emerging business models – all of which, if you trace it back, rest on the technologies that we talk about as being part of cloud computing.

I have written an essay on this and broken it down into four parts, reflecting the layers and progression, and I will post them over the next few days.

Part 1 : Base Technology

 

At its most basic, cloud computing is about providing and disposing of computing resources quickly, easily and on demand.

Think about Mozy backup – you can get backup for your PC in a few minutes without having to go out and buy a backup disk, plug it in, power it up, install drivers, format, etc. Instead, you download a piece of software, put in your credit card details and ta-da, you have a good backup solution until you don’t want it anymore, in which case you simply cancel the service and you don’t have an external disk lying around that needs to be disposed of. The Mozy example demonstrates computing resources (backup) provisioned rapidly (no waiting for hardware and no hardware setup) that is almost immediately available and can be disposed of just as fast. It is, by a broader definition, Cloudy.

Unfortunately, instantly providing computing resources is not as easy as one would think (as anyone who has seen data centre lead times is aware), so the seemingly simple objective of providing computing resources on demand relies on some base technologies that are generally considered part of cloud computing.

It is the base technologies that have gradually matured over time that have given us the ability to achieve the goal of utilizing computing resources easily, and the following four are the primary influencing technologies.

Virtualization

Obviously, if you want resources and want them now, it doesn’t make sense to have to physically get a new machine, install it in a rack, plug it in and power it up. So a virtual machine that can be spun up within a couple of minutes is key to the ability to provide for the demand. Virtualization also forces the removal of specialized equipment on which software may depend by providing a baseline, non-specialized, machine abstraction.

Shared Resources

Individual resource consumers do not want to buy their resources up front – it would go against the idea of ‘on demand’. So it makes sense to create a pool of resources that is potentially available to everyone, with resources allocated and de-allocated according to individual consumers’ needs.  Multi-tenancy is a further concept behind the sharing of resources, where multiple customers share a single physical resource at the same time.  Virtual machines running on the same physical hardware are an example of multi-tenancy.

Automation

In order to make all of these shared, virtualized resources available on demand, some automation tooling needs to sit between the request for a resource and the fulfilment of that request – it has to be zero touch by an expensive engineer. Sending an email and waiting for someone in operations to get around to it is not exactly rapid provisioning. So a big part of cloud computing is the tools and infrastructure to spin up machines, bring new hardware online, handle failures, patch software, and de-allocate and decommission machines and resources.
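
As a rough illustration of what ‘zero touch’ means, below is a minimal sketch of the kind of reconciliation loop such automation performs. The ProviderApi class is a hypothetical, in-memory stand-in, not any real platform’s API.

```python
# Minimal sketch of zero-touch provisioning: reconcile the number of running
# instances with the number requested, with no engineer in the loop.
# ProviderApi is a hypothetical, in-memory stand-in for a real provider API.

class ProviderApi:
    def __init__(self):
        self._running = {}  # service name -> list of instance handles

    def running_instances(self, service):
        return list(self._running.get(service, []))

    def start_instance(self, service):
        # A real platform would clone a VM image and boot it here.
        self._running.setdefault(service, []).append(object())

    def stop_instance(self, service, instance):
        # A real platform would de-allocate the VM and return its
        # capacity to the shared pool.
        self._running[service].remove(instance)


def reconcile(api, service, desired):
    """Bring the running instance count in line with the requested count."""
    running = api.running_instances(service)
    for _ in range(desired - len(running)):
        api.start_instance(service)            # scale up
    for instance in running[desired:]:
        api.stop_instance(service, instance)   # scale down


api = ProviderApi()
reconcile(api, "web", desired=3)               # demand arrives: three instances
reconcile(api, "web", desired=1)               # demand drops: back to one
print(len(api.running_instances("web")))       # -> 1
```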

Abstracted Services

Computing resources need not be limited to specific low-level hardware resources such as an addressable memory block or a spindle on a disk – not only is that generally unnecessary, it is technically impossible when coupled with quick, on-demand provisioning. A fundamental technology advancement of the cloud is the increased use and availability of abstracted computing resources consumed as services. While a virtual machine is an abstraction of a much more complicated physical layer, the abstractions become much higher level when resources are exposed as services: a consumer doesn’t ask for a specific disk, but rather requests resources from a storage service where all of the complicated stuff is abstracted away and taken care of.
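
A minimal sketch of that shift, assuming a hypothetical StorageService class (real platforms expose blob, table and queue services with their own APIs): the consumer names what it wants stored, never the disk it lives on.

```python
# Hypothetical storage service: the caller never sees disks, volumes or
# servers -- placement, replication and failure handling are the service's
# problem, not the consumer's.

class StorageService:
    def __init__(self):
        self._blobs = {}

    def put(self, container, name, data):
        self._blobs[(container, name)] = data

    def get(self, container, name):
        return self._blobs[(container, name)]


storage = StorageService()

# Old thinking: "write this file to disk E: on server 12".
# Abstracted thinking: "keep this blob in the 'invoices' container".
storage.put("invoices", "2009-10.pdf", b"%PDF-1.4 ...")
print(len(storage.get("invoices", "2009-10.pdf")))
```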

These technical solutions to the demand problem have, in turn, had some interesting side effects on existing models of computing. The public cloud, utility pricing, commodity nodes and service specializations have emerged as rediscovered computing models that are driving the adoption of cloud technologies.

Continue to part 2 : ‘Computing Models’

Simon Munro

@simonmunro

As the official release of Azure looms, and the initial pricing model becomes understood, a lot of technical people are crunching numbers to see how much it will cost to host a solution on Azure.  It seems that most of the people doing the comparisons are doing them for smaller solutions to be hosted, not in some corporate on-premise data centre, but on any one of the hundreds of public .NET hosting providers out there.

This is not surprising, since the type of person who is looking at the pre-release version of Azure is also the kind of person who has hundreds of ideas for the next killer website, if only they could find the time and a good designer to help them (disclaimer: I am probably one of those people).  So they look at the pricing model from the perspective of someone who has virtually no experience in running a business and is so technically capable that they have misconceptions about how a small business would operate and maintain a website.

Unsurprisingly, they find that Azure works out more expensive than the cost of (perceived) equivalent traditional hosting. So you get statements like this:

“If you add all these up, that’s a Total of $98.04! And that looks like the very minimum cost of hosting an average "small" app/website on Azure. That surely doesn’t make me want to switch my DiscountASP.NET and GoDaddy.com hosting accounts over to Windows Azure.” Chris Pietschmann

Everyone seems shocked and surprised.

Windows Azure is different from traditional hosting, which means that Microsoft’s own financial models and those of their prospective customers are different.  You don’t have to think for very long to come up with some reasons why Microsoft does not price Azure to compete with traditional hosting…

  • Microsoft is a trusted brand.  Regardless of well-publicised vulnerabilities (in the technical community) and a growing open source movement, in the mind of business Microsoft is considered low risk, feature rich and affordable.
  • Microsoft has invested in new datacentres, and the divisions that own them need a financial model that demonstrates a worthwhile investment.  I doubt that in the current economic climate Wall Street is ready for another Xbox-like loss leader. (This is also probably the reason why Microsoft is reluctant to package an on-premise Azure.)
  • Azure is a premium product that offers parts of the overall solution that are lacking in your average cut-rate hosting environment.

Back to the alpha geeks who are making observations about the pricing of Azure.  Most of them have made the time to look at the technology outside their day job.  They either have ambitions to do something ‘on their own’, are doing it on the side in a large enterprise or, in a few cases, are dedicated to assessing it as an offering for their ISV.

They are not the target market.  Yet.

Azure seems to be marketed at small to medium businesses that do not have, want or need much in the way of internal, or even contracted, IT services and skills.  Maybe they’ll have an underpaid desktop-support type of person who can run around the office getting the owner/manager’s email working – but that is about it. (Another market is the rogue enterprise departments that, for tactical reasons, specifically want to bypass enterprise IT – but they behave much like smaller businesses.)

Enterprise cloud vendors, commentators and analysts endlessly debate the potential cost savings of the cloud versus established on-premise data centres.  Meanwhile, smaller businesses, whose data centre consists of little more than a broadband wireless router and a cupboard, don’t care much about enterprise cloud discussions.  In addressing the needs of the smaller business, Windows Azure comes with some crucial components that are generally lacking in traditional hosting offerings:

  • Because Azure is a Platform as a Service (PaaS), there are no low-level technical operations that you can do on it – which also means that they are taken care of for you.  There is no need to download, test and install patches.  No network configuration and firewall administration.  No need to perform maintenance tasks like clearing up temporary files, logs and general clutter.  In a single-tenant co-location hosting scenario this costs extra money as it is not automated and requires a skilled person to perform the tasks.
  • The architecture of Azure, where data is copied across multiple nodes, provides a form of automated backup.  Whether or not this is sufficient (we would like a .bak file of our database on a local disk), the idea and message that it is ‘always backed up’ is reassuring to the small business.
  • The cost/benefit model of Azure’s high availability (HA) offering is compelling.  I challenge anybody to build a 99.95% available web and database server for a couple of hundred dollars a month at a traditional hosting facility or even in a corporate datacentre (this is from the Azure web SLA and works out to roughly 21 minutes of downtime a month – see the quick calculation after this list).  The degree of availability of a solution needs to be backed up by a business case, and often, once the costs are tabled, business will put up with a day or two of downtime in order to save money.  Azure promises significant availability in the box, and at the price it could easily be justified against the loss of a handful of orders or even a single customer.
  • Much is made of the scalability of Azure and it is a good feature to have in hand for any ambitious small business and financially meaningful for a business that has expected peaks in load.  Related to the scalability is the speed at which you can provision a solution on Azure (scaling from 0 to 1 instances).  Being able to do this within a few minutes, together with all the other features, such as availability, is a big deal because the small business can delay the commitment of budget to the platform until the last responsible moment.
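
As a quick sanity check on the downtime figure quoted in the list above, the arithmetic is simply the SLA percentage applied to a 30-day month:

```python
# 99.95% monthly availability leaves roughly 21 minutes of allowed downtime
# in a 30-day month.

sla = 0.9995
minutes_per_month = 30 * 24 * 60           # 43,200 minutes in a 30-day month
allowed_downtime = minutes_per_month * (1 - sla)
print(round(allowed_downtime, 1))          # -> 21.6
```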

So there are a whole lot of features that need to be communicated to the market – almost like ‘you qualify for free shipping’ when buying a book online, where the consumer is directed to the added value that they understand.

The catch is that the target market does not understand high availability the way everyone understands free shipping.  The target market for Azure doesn’t even know that Azure exists, or care – they have a business to run and a website to launch.  Those technical details need to be sorted out by technical people, who in turn need to produce a convincing proposal.

The obvious strength that Microsoft has over other cloud vendors is their channel.  Amazon and Google barely have a channel for sales, training and development of cloud solutions – besides, that is not even their core business.  Microsoft has thousands of partners, ISVs and trainers, and a huge loyal following of developers.

In targeting the small to medium business, Microsoft is pitching Azure at the ISVs.  The smaller business without internal development capabilities will turn to external expertise, often in the shape of a reputable organization (as opposed to contractors), for solutions – and the ISVs fulfil that role.  So to get significant traction on Azure, Microsoft needs to convince the ISVs of the benefits of Azure and, as this post tries to illustrate, some of the details of the financial considerations of the small business and their related technology choices.

Microsoft needs to convince the geeks out there that a whole lot more comes with Azure that is very important to smaller businesses and is not available from traditional hosting. Microsoft needs to help us understand the costs, and not just the technology, so that we can convince our customers that although Azure is not cheap, it makes good financial sense.

Simon Munro

@simonmunro

I like to ride motorbikes.  Currently I ride a BMW K1200S – a sports tourer that is both fast and comfortable on the road.  Before that I had a five year affair with a BMW R1150GS which took me to all sorts of off-the-beaten-track destinations before we abruptly parted company with me flying through the air in one direction as my bike was smashed in the other direction by criminals in a getaway car.

Most motorbike enthusiasts have, like me, owned a few in their lifetimes, and in most cases they are of differing types.  A road bike, no matter how much you are prepared to spend, can barely travel faster than walking pace on a good-quality dirt road because, apart from the obvious things like tyres and suspension, the geometry is all wrong.  The converse is also true – a good dirt bike is frustrating, dull and downright dangerous to ride on a road.

Bikers understand the issues around suitability for purpose and compromise more than most (such as car drivers).  Our lottery winning fantasies have a motorbike garage filled, not simply with classics or expense, but with a bike suitable for every purpose and occasion – track, off-road, touring, commuting, cafe racing and every other obvious niche.  Some may even want a Harley Davidson for the odd occasion that one would want to ride a machine that leaks more oil than fuel it uses and one would want to travel in a perfectly straight line for 200 yards before it overheats and the rider suffers from renal damage.

But I digress.  Harley Davidson hogs, fanbois (or whatever the collective noun is for Harley Davidson fans) can move on.  This post has nothing to do with you.

There is nothing in the motorbike world that is analogous to the broad suitability of the SQL RDBMS.  SQL spans the most simple and lightweight up to the complex, powerful and expensive – with virtually every variation in between covered.  It is not just motorbikes; a lot of products out there would want such broad suitability – cars, aeroplanes and buildings.  SQL is in a very exclusive club of products that solve such a broad range of the same problem, and in the case of SQL, that problem is data storage and retrieval.  SQL also seems to solve this problem in a way where the relationships between load, volume, cost, power and expense are fairly linear.

SQL’s greatest remaining strength, and the source of its almost industry-wide ubiquity, is that it is the default choice for storing and retrieving data.  If you want to store a handful of records, you might as well use a SQL database, not text files.  And if you want to store and process huge amounts of transactional data, in virtually all cases a SQL database is the best choice.  So over time, as the demands and complexity of our requirements have grown, SQL has filled the gaps like sand on a windswept beach, exclusively filling every nook and cranny.

We use SQL for mobile devices, we use SQL for maintaining state on the web, we use SQL for storing rich media, and use it to replicate data around the world.  SQL has, as it has been forced to satisfy all manner of requirements, been used, abused, twisted and turned and generally made to work in all scenarios.  SQL solutions have denormalization, overly complex and inefficient data models with thousands of entities, and tens of thousands of lines of unmaintainable database code. But still, surprisingly, it keeps on giving as hardware capabilities improve, vendors keep adding features and people keep learning new tricks.

But we are beginning to doubt the knee-jerk implementation of SQL for every data storage problem and, at least at the fringes of its capabilities, SQL is being challenged.  Whether it is developers moving away from over-use of database programming languages, cloud architects realising that SQL doesn’t scale out very well, or simply CIOs getting fed up with buying expensive hardware and more expensive licences, the tide is turning against SQL’s dominance.

But this post is not an epitaph for SQL, or another ‘some-or-other technology is dead’ post.  It is rather an acknowledgement of the role that SQL plays – a deliberate, metronomic applause and standing ovation for a technology that is, finally, showing that it is not suitable for every conceivable data storage problem.  It is commendable that SQL has taken us this far, but the rate at which we are creating information is exceeding the rate at which we can cheaply add power (processing, memory and I/O performance) to a single database instance.

SQL’s Achilles heel lies in its greatest strength – SQL is big on locking, serial updates and other techniques that allow it to be a bastion of consistent, reliable and accurate data.  But that conservative order and robustness comes at a cost, and that cost is the need for SQL to run on a single machine.  Spread across multiple machines, the locking, checking, index updating and other behind-the-scenes steps suffer from latency issues, and the end result is poor performance.  Of course, we can build even better servers with lots of processors and memory, or run some sort of grid computer, but then things start getting expensive – ridiculously expensive, as heavy-metal vendors build boutique, custom machines that only solve today’s problem.

The scale-out issues with SQL have been known for a while by a small group of people who build really big systems.  But recently the problems have entered more general consciousness thanks to Twitter’s fail whale, which is largely due to data problems, and to the increased interest in the cloud by developers and architects of smaller systems.

The cloud, by design, tries to make use of smaller commodity (virtualized) machines and therefore does not readily support SQL’s need for fairly heavyweight servers.  So people looking at the cloud, despite promises that their applications will port easily, are obviously asking how they bring their databases into the cloud and finding a distinct lack of answers.  The major database players seem to quietly ignore the cloud and don’t have cloud solutions – you don’t see DB2, Oracle or MySQL for the cloud, and the only vendor giving it a go, to their credit (and possibly winning market share), is Microsoft with SQL Server.  Even then, SQL Azure (the version of SQL Server that runs on Azure) has limitations, including size limitations that are indirectly related to the size of the virtual machine on which it runs.

Much is being made of approaches to get around the scale-out problems of SQL and, with SQL Azure in particular, of a sharding approach for data.  Some of my colleagues were actively discussing this and it led me to weigh in with the following observation:

There are only two ways to solve the scale out problems of SQL Databases

1. To provide a model that adds another level of abstraction for data usage (EF, Astoria)

2. To provide a model that adds another level of abstraction for more complicated physical data storage (Madison)

In both cases you lose the “SQLness” of SQL.

It is the “SQLness” that is important here and is the most difficult thing to find the right compromise for.  “SQLness” to an application developer may be easy to use database drivers and SQL syntax; to a database developer it may be the database programming language and environment; to a data modeller it may be foreign keys; to a DBA it may be the reliability and recoverability offered by transaction logs.  None of the models that have been presented satisfy the perspectives of all stakeholders so it is essentially impossible to scale out SQL by the definition of what everybody thinks a SQL database is.

So the pursuit of the holy grail of a scaled out SQL database is impossible.  Even if some really smart engineers and mathematicians are able to crack the problem (by their technically and academically correct definition of what a SQL database is), some DBA or developer in some IT shop somewhere is going to be pulling their hair out thinking that this new SQL doesn’t work the way it is supposed to.

What is needed is a gradual introduction of the alternatives and the education of architects as to what to use SQL for and what not to – within the same solution.  Just like you don’t need to store all of your video clips in database blob fields, there are other scenarios where SQL is not the only option.  Thinking about how to architect systems that run on smaller hardware, without the safety net of huge database servers, is quite challenging and is an area that we need to continuously discuss, debate and look at in more detail.

The days when we could assume that SQL will do everything for us are over and, like motorcyclists, we need to choose the right technology or else we will fall off.

Simon Munro

@simonmunro

Database sharding, as a technique for scaling out SQL databases, has started to gain mindshare amongst developers.  This has recently been driven by the interest in SQL Azure, closely followed by disappointment over its 10GB database size limitation, which in turn is brushed aside by Microsoft who, in a vague way, point to sharding as a solution to the scalability of SQL Azure.  SQL Azure is a great product and sharding is an effective (and proven) technique, but before developers who have little experience with building scalable systems are let loose on sharding (or even worse, vendor support for ‘automatic’ sharding), we need to spend some time understanding what the issues are with sharding, the problem that we are trying to solve, and some ways forward for tackling the technical implementation.

The basic principles of sharding are fairly simple.  The idea is to partition your data across two or more physical databases so that each database (or node) has a subset of the data.  The theory is that in most cases a query or connection only needs to look in one particular shard for data, leaving the other shards free to handle other requests.  Sharding is easily explained by a simple single-table example.  Let’s say you have a large customer table that you want to split into two shards.  You can create the shards by putting all of the customers whose names start with ‘A’ up to ‘L’ in one database and those from ‘M’ to ‘Z’ in another, i.e. a partition key on the first character of the Last Name field.  With roughly half the alphabet in each shard you would expect an even spread of customers across both shards, but without data you can’t be sure – maybe there are more customers in the first shard than the second, and maybe your particular region has more in one than the other.
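
A minimal sketch of that last-name split, with placeholder connection strings (in practice the routing function would sit in the data access layer, in front of whichever databases hold the shards):

```python
# Route a customer lookup to the shard that holds it, using the first
# character of the last name as the partition key. Connection strings are
# placeholders only.

SHARDS = {
    "shard_1": "Server=db1;Database=Customers_A_L;...",   # last names A-L
    "shard_2": "Server=db2;Database=Customers_M_Z;...",   # last names M-Z
}

def shard_for_customer(last_name: str) -> str:
    """Return the name of the shard that holds this customer."""
    return "shard_1" if last_name[0].upper() <= "L" else "shard_2"

print(shard_for_customer("Adams"))   # -> shard_1
print(shard_for_customer("Munro"))   # -> shard_2
```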

Let’s say that you decide it will be better to shard customers by region to get a more even split, and you have three shards: one for the US, one for Europe and one for the rest of the world.  Unlikely as it may sound, you may find that although the number of rows is even, the load across each shard differs.  80% of your business may come from a single region, or, even if the amount of business is even, the load will differ at different times of the day as business hours move across the world.  The same problem exists across all primary entities that are candidates for sharding.  For example, your product catalogue sharding strategy will have similar issues.  You can use product codes for an even split, but you may find that top-selling products all end up in one shard.  If you fix that, you may find that top-selling products are seasonal, so today’s optimal shard will not work at all tomorrow.  The problem can be expressed as:

The selection of a partition key for sharding is dependent on the number of rows that will be in each shard and the usage profile of the candidate shard over time.

Those are some of the issues in just trying to figure out your sharding strategy – and that is the easy part.  Sharding comes with a rule that the application layer is responsible for understanding how the data is split across the shards (whereas the term ‘partition’ is applied more to the RDBMS, where partitioning is transparent to the application).  This creates some problems:

  • The application needs to maintain an index of partition keys in order to query the correct database when fetching data.  This means that there is some additional overhead – database round trips, index caches and some transformation of application queries into queries against the correct database connection.  While simple for a single table, it is likely that a single object may need to be hydrated from multiple databases, and figuring out where to go and fetch each piece of data, dynamically (depending on already fetched pieces of data), can be quite complex.  (A minimal sketch of such a shard map follows this list.)
  • Any sharding strategy will always be biased towards a particular data traversal path.  For example, in a customer-biased sharding strategy you may keep the related rows in the same shard (such as the related orders for the customer).  This works well because the entire customer object and related collections can be hydrated from a single physical database connection, making the ‘My Orders’ page snappy.  Unfortunately, although it works for the customer-oriented traversal path, the order fulfilment path is hindered by current and open orders being scattered all over the place.
  • Because the application layer owns the indexes and is responsible for fetching data, the database is rendered impotent as a query tool: each individual database knows nothing about the other shards and cannot execute a query accordingly.  Even if each database had access to the shard index, it would trample all over the application layer’s domain, causing heaps of trouble.  This means that all data access needs to go through the application layer, which creates a lot of work to implement an object representation of all database entities, their variations and query requirements.  SQL cannot be used as a query language, and neither can ADO, OleDB or ODBC – making it impossible to use existing query and reporting tools such as Reporting Services or Excel.
  • In some cases, sharding may be slower.  Queries that need to aggregate or sort across multiple shards will not be able to take advantage of heavy lifting performed in the database.  You will land up re-inventing the wheel by developing your own query optimisers in the application layer.
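
To make the first and last points above concrete, here is a minimal sketch of what the application layer ends up owning: a shard index for routing single-key lookups, and a fan-out across every shard for any query that cuts across the partition key. The query_shard helper is a placeholder for real data access code.

```python
# Minimal sketch of an application-layer shard map. query_shard stands in for
# real data access code against a specific physical database.

SHARD_INDEX = {                 # partition key range -> shard name
    ("A", "L"): "shard_1",
    ("M", "Z"): "shard_2",
}

def shard_for(last_name: str) -> str:
    """Look up which shard holds a customer, using the partition key index."""
    first = last_name[0].upper()
    for (low, high), shard in SHARD_INDEX.items():
        if low <= first <= high:
            return shard
    raise KeyError(last_name)

def query_shard(shard: str, sql: str) -> list:
    """Placeholder: run sql against the named shard and return the rows."""
    return []

def orders_for_customer(last_name: str) -> list:
    # Happy path: the partition key tells us exactly which database to hit.
    return query_shard(shard_for(last_name), "SELECT ... WHERE LastName = ?")

def open_orders_everywhere() -> list:
    # Unhappy path: a query that cuts across the sharding key has to hit every
    # shard, with merging, sorting and aggregation done in the application.
    rows = []
    for shard in set(SHARD_INDEX.values()):
        rows.extend(query_shard(shard, "SELECT ... WHERE Status = 'Open'"))
    return rows
```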

In order to implement sharding successfully we need to deal with the following:

  1. The upfront selection of the best sharding strategy.  What entities do we want to shard?  What do we want to shard on? 
  2. The architecture and implementation of our application layer and data access layer.  Do we roll our own?  Do we use an existing framework?
  3. The ability to monitor performance and identify problems with the shards in order to change (and re-optimise) our initially chosen sharding strategy as the amount of data and usage patterns change over time.
  4. Consideration for other systems that may need to interface with our system, including large monolithic legacy systems and out-of-the-box reporting tools.

So some things to think about if you are considering sharding:

  • Sharding is no silver bullet and needs to be evaluated architecturally, just like any other major data storage and data access decision.
  • Sharding of the entire system may not be necessary.  Perhaps only the part of the web front-end that needs performance under high load has to be sharded, and the back-office transactional systems don’t need to be sharded at all.  So you could build a system where a small part is sharded and data migrates to a more traditional model (or even a data warehouse) as needed.
  • Sharding for scalability is not the only approach for data – perhaps some use could be made of non-SQL storage.
  • Hand coding all of the application objects may be a lot of work and difficult to maintain.  Use can be made of a framework that assists, or a code generation tool, but it has to be feature complete and handle the issues raised in this post.
  • You will need to take a very careful approach to the requirements in a behavioural or domain-driven style.  Creating a solution where every entity is sharded, every object is made of shards, and every possible query combination that could be thought up is implemented is going to be a lot of work and will result in a brittle, unmaintainable system.
  • You need to look at your database vendors’ support of partitioning.  Maybe it will be good enough for your solution and you don’t need to bother with sharding at all.
  • Sharding, by splitting data across multiple physical databases, loses some (maybe a lot) of the essence of SQL – queries, data consistency, foreign keys, locking.  You will need to understand whether that loss is worthwhile – maybe you will land up with a data store that is too dumbed down to be useful.

If you are looking at a Microsoft stack specifically, there are some interesting products and technologies that may affect your decisions.  These observations are purely my own and are not gleaned from NDA sourced information.

  • ADO.NET Data Services (Astoria) could be the interface at the application level in front of sharded objects.  It replaces the SQL language with a queryable RESTful language.
  • The Entity Framework is a big deal for Microsoft and will most likely, over time, be the method with which Microsoft delivers sharding solutions.  EF is destined to be supported by other Microsoft products, such as SQL Reporting Services, SharePoint and Office, meaning that sharded EF models will be able to be queried with standard tools.  Also, Astoria supports EF already, providing a mechanism for querying the data with a non SQL language.
  • Microsoft is a pretty big database player and has some smart people on the database team.  One would expect that they will put effort into the SQL core to better handle partitioning within the SQL model.  They already have Madison, which although more read-only and quite closely tuned for specific hardware configurations, offers a compelling parallelised database platform.
  • The Azure platform has more than just SQL Azure – it also has Azure storage which is a really good storage technology for distributed parallel solutions.  It can also be used in conjunction with SQL Azure within an Azure solution, allowing a hybrid approach where SQL Azure and Azure Storage play to their particular strengths.
  • The SQL Azure team has been promising some magic to come out of the Patterns & Practices team – we’ll have to wait and see.
  • Ayende seems to want to add sharding to NHibernate.

Database sharding has typically been the domain of large websites that have reached the limits of their own, really big, datacentres and have the resources to shard their data.  The cloud, with small commodity servers such as those used by SQL Azure, has raised sharding as a solution for smaller websites, but they may not be able to pull it off because of a lack of resources and experience.  The frameworks aren’t quite there and the tools don’t exist (such as an analysis tool for candidate shards based on existing data) – and without those tools it may be a daunting task.

I am disappointed that the SQL Azure team throws out the bone of sharding as the solution to their database size limitation without backing it up with tools, realistic scenarios and practical advice.  Sharding a database requires more than hand waving and PowerPoint presentations; it requires a solid engineering approach to the problem.  Perhaps they should talk more to the Azure services team and offer hybrid SQL Azure and Azure Storage architectural patterns that are compelling and architecturally valid.  I am particularly concerned when sharding is offered as a simple solution to small businesses that have to make a huge investment in a technology and an architecture that they are possibly unable to maintain.

Sharding will, however, gain traction and is a viable way to scale out databases, on SQL Azure and elsewhere.  I will try to do my bit by communicating some of the issues and solutions – let me know in the comments if there is demand.

Simon Munro

@simonmunro

I originally published ‘The Usual Suspects’ in August 2006 and at the time it seemed to strike a chord with the beginning of the anti-architect revolution.  In the three intervening years the ‘architect’ label has become a title that no self-respecting developer wants and is synonymous with someone who is no good at software development.  I thought that on the three-year anniversary of the original post it was time for an update that includes the architects who have emerged more recently.  Also, apologies for the masculine reference to the architects – there are some women out there doing bad architecture as well.

The Hammer Architect

The Hammer Architect knows one tool reasonably well, although, since he may also be a Non-Coding Architect, it may only be the tool that everyone stopped using five years ago.  The Hammer Architect sees every problem as a nail perfectly suited to his hammer and bangs away relentlessly like Bender in a steel drum with Paris Hilton.  Hammer Architects leave behind solutions that had promise after two weeks (when the prototype was delivered) but months into the project seem not to have moved beyond the initial prototype because the wrong problem is being solved.

How to spot The Hammer Architect

Hammer Architects are closely aligned with the marketing arm of the hammer vendor and, when the project is going awry, are able to wheel in a technology specialist from the vendor’s marketing department who is able to reinvigorate the sponsors and confirm that The Hammer Architect is doing a grrrreat job.  You will also find that The Hammer Architect will send you links to nicely formatted case studies on the success of his hammer as implemented in a different country, region, language, currency and industry, apparently solving a problem completely unrelated to your own.

The Community Architect

The Community Architect thinks that his customers are impressed with his ‘Architect’ title and assumes that the technical community will be suitably impressed too.  He stalks unsuspecting user groups and conferences offering to do presentations where the subject may look compelling but the delivery is so bland that half of the audience is asleep and the other half is engrossed in their mobiles looking out for interesting tweets.  The Community Architect doesn’t get much feedback, and people are polite in the breaks, ushering him on his way, thankful that he will present next month to a different user group.

How to spot The Community Architect

The Community Architect is generally quite senior at a small ‘consulting company’ with an unremarkable name that, although on every slide, is forgotten three minutes into the presentation.  A Google search of his name returns links to a guy from a country town who is suspected of beating his mother-in-law to death with a hosepipe and, since you can’t find any technical references, content or blog for The Community Architect, you begin to think that it is the same person.  The first slide of the presentation has the word ‘Architect’ in the title and ‘Architecture’ appears on virtually every slide.  If you stayed around long enough to collect a business card you would see that their title contains all of the following words – ‘Architect’, ‘Consultant’ and ‘Senior’.

The Non-Coding Architect

The Non-Coding Architect simply became so good at coding that he reached a spiritual coding nirvana where his mastery of code was so high, and so pure, that he had to move to another plane of coding consciousness and leave the code behind altogether.  Because his code was so pure he can understand all technical problems just by reading a product announcement on the vendors’ website, watching a video and meditating.  He is able to pluck the essence of the solution out of the aether, which in turn gets handed down to the unwashed developers for implementation.

How to spot the Non-Coding Architect

Non-Coding Architects are difficult to spot because they masquerade as Enterprise Architects and can even produce documentation or blog posts that give you the impression that they have written code in the last few years.  The easiest way to identify a Non-Coding Architect is to invite them (in a grovelling manner, of course) to help you solve a programming problem you are having – right there in the IDE.  The Non-Coding Architect will not grab your keyboard and push you out of your chair, but will feign an almost-solution that he needs to go and try out on his machine before he gets back to you – all while making suggestions on how to adhere to his coding standards guidelines.

The Driven Development Architect

The Driven Development (*DD) Architect has moved beyond TDD, BDD and DDD and is using the latest DD technique that ‘everybody’ (being the four subscribers to the SomethingDD Google group) is using and will radically change how we do development in the future.  He has a repertoire of at least 26 DD techniques and is developing support for UDD (Unicode Driven Development) to support even more techniques.  He is probably working, at this very moment, on a book and seminars called ‘Design Driven Development and Development Driven Design’ but is struggling with the approach because Eric Evans got to the ‘D’ first.

How to spot the Driven Development Architect

Driven Development architects are easy to spot because they use ‘DD’ and ‘Driven Development’ frequently in conversation, blog posts and tweets.  They always seem to introduce new DD techniques based on a very advanced and new framework or approach that is documented in two unindexed blog posts and seven tweets.  Driven Development architects interact with real development teams in the confidence that they are better than mere TDDers, but when hanging out with other Driven Development architects they tend to fight a lot – mostly about how much better SomethingDD is than AnotherDD and whose DD should get a particular letter of the alphabet (although Unicode should improve this).

The Reluctant Architect

The Reluctant Architect is simply a good technical person who is called an architect because a) he was the most senior developer on the team when the previous architect quit, or b) he has been at the same pay scale for the last three years and ‘Architect’ or ‘Presales Consultant’ were the only career paths available, or c) his employer parades him as an ‘Architect’ in front of the customer in order to get better rates.  The Reluctant Architect does, surprisingly, actually do architecture but simply considers it part of building solutions.

How to spot the Reluctant Architect

Reluctant Architects are difficult to spot because they don’t actually tell you that they are architects.  The best way to uncover a Reluctant Architect is to look for someone that doesn’t claim to be one, does architecture and is indicated as being an architect on their business card or LinkedIn profile.  They also frequently deride self-proclaimed architects in conversations and posts such as this one.

Below are the original Usual Suspects from 2006…

The PowerPoint Architect

By far the most common type of architect is The PowerPoint Architect.  These architects produce the best-looking architectures on paper… I mean PowerPoint.  Great colours, no crossing lines and reasonably straightforward to implement… apparently.  The problem with PowerPoint Architects is that they are so far removed from real implementation that the architectures they propose simply won’t work.  The PowerPoint Architect is generally a consultant who, just before implementation is about to start, picks up their slides and moves to the next project – leaving everyone else to implement their pretty diagrams.  The PowerPoint Architect believes that software development is similar to doing animations in PowerPoint and that infrastructure is about how to get your notebook connected to a data projector.

How to spot The PowerPoint Architect

The PowerPoint Architect gives him/herself away by scheduling presentations in meeting rooms and having so many slides that there is no time to go into the detail.  If the meeting has more business and project representatives than technical staff, it was probably organized by The PowerPoint Architect so that technical questions seem out of place and should be ‘taken off-line’.  The PowerPoint Architect has also been known to use Visio.

The Matrix Architect

Named after ‘The Architect’ in the Matrix movie series, The Matrix Architect has been there so long that he/she doesn’t know any other way.  Matrix Architects leave no room for improvement, discussion or negotiation as the architecture was written by them eons ago and has worked fine, thank you very much.  Much like the scene in The Matrix Reloaded, The Matrix Architect has a personalised, well-defended office and if you manage to get in, you simply have to leave by one of two doors – without getting a chance to explain yourself.

How to spot The Matrix Architect

The Matrix Architect normally has their own office and is well settled.  Technical books on CORBA, Betamax and other has-been technologies are proudly displayed on the shelves.  The Matrix Architect can also be spotted by their uncanny ability to work their way into meetings and throw curveball comments like “That’s just like the SGML interface that we used on DECT and in my day…”

The Embedded Architect

The Embedded Architect creates architectures that are so huge and complex that removing them is similar to taking out your own liver.  Most of the time they do this for career stability or, if they come from an external organization, to milk as much future profit out of projects as possible.

How to spot The Embedded Architect

The Embedded Architect is very difficult to spot during the embryonic stage when they are infecting the existing architecture, and often once spotted it is too late.  The Embedded Architect often has a team of disciples that as a group understand the entire architecture, but individually know very little.  A requirement that new team members go on an induction course on the architecture is a sign that there may be an Embedded Architect somewhere within the organization.

The Hardware Vendor Architect

The Hardware Vendor Architect is actually a salesman with a reworked title.  The Hardware Vendor Architect’s role is to point out the flaws in everyone else’s architecture so that they can justify why the extra hardware expense is not their fault.  At Hardware Architect School, The Hardware Vendor Architect is trained in creating proprietary hardware platforms that create vendor lock-in.

How to spot The Hardware Vendor Architect

The Hardware Vendor Architect normally has a car full of pens, mouse mats and notepads emblazoned with some well-known brand, which they use to assimilate the weak.  They also have huge expense accounts with which they can occasionally take the entire data centre to lunch.  They are often heard saying things like ‘You need a 24×7 99.999999% disaster recovery site’.

The Auditor Architect

We are not sure of the origins of The Auditor Architect, because they are supposed to be auditing things, not creating architectures.  The Auditor Architect will always propose an architecture that uses spreadsheets for every possible system interface and requires each user to be a CA so that they can review the transactions before they are submitted (not to be confused with The Auditor Project Manager, who uses spreadsheets for all documentation).  Since most organizations don’t have that many CAs, The Auditor Architect represents a firm that can provide as many CAs as may be necessary.

How to spot The Auditor Architect

The Auditor Architect always wears a black suit, white shirt and an expensive tie in the latest fashionable colour and style.  The Auditor Architect will often go to great lengths to express that they are unbiased and just want to make sure that things are done correctly.  Most emails received from The Auditor Architect have spreadsheet attachments.

The Gartner Architect

The Gartner Architect knows all the buzzwords and has all the supporting documentation.  They never actually put together a workable architecture but run ongoing workshops on the likelihood of the architecture looking a particular way at some point in the next six months to five years.  As soon as an architecture is established, The Gartner Architect uncovers some ‘new research’ that requires a suspension of the project while the architecture is re-evaluated.  Incidentally, The Gartner Architect is sometimes known as The Meta Architect.

How to spot The Gartner Architect

The Gartner Architect always does presentations with references to some research noted on every slide and the true test of The Gartner Architect is asking for the document that is being referred to – it won’t materialize.  The Gartner Architect is often accompanied by a harem of PowerPoint Architects eager to get their hands on the material.  The Gartner Architect is often entertained by The Hardware Architect, provided that they represent products that are in ‘The Magic Quadrant’.

The ERP Vendor Architect

True Architects for ERP systems do exist – but they hang out somewhere else, like in Germany, and not on your particular project.  There is no need for an architect on a system that, if changed, self-destructs within thirty seconds.  The ERP Vendor Architect is actually an implementation project assistant who is billed at a high rate.

How to spot The ERP Vendor Architect

The ERP Vendor Architect almost always has a branded leather folder from some really fun training conference that they went to in some exotic location with thousands of other ERP Vendor Architects.  A dead giveaway is if The ERP Vendor Architect and The Hardware Vendor Architect are exchanging corporate gift goodies – a sure sign that they are colluding to blame legacy systems for the poor performance.

The UML Architect

The UML Architect is not interested in any architecture that cannot be depicted using UML diagrams and spends a considerable amount of effort making sure that this happens.  The UML Architect lives in an object bubble and has no consideration that their intended audience never learned Smalltalk.

How to spot The UML Architect

The UML Architect is easy to spot from the documents that they produce.  All documents have a lot of stick-men, hang-men and cartoon characters pointing at bubbles.  The UML Architect will always be able to describe the architecture by <<stereotyping>> it as something that you will understand.

The Beta Architect

The Beta Architect insists that the current version of whatever software you are using is going to be ridiculously out of date by the time the system goes live.  For that reason it is important that the development be done with the beta framework, operating system or development environment and, not to worry, the product will probably be released before the system needs to go into production.

How to spot The Beta Architect

The Beta Architect normally wears a golf shirt with a large software vendor’s logo embroidered on the front and walks around with a suitably branded conference bag.  The Beta Architect normally comes from an external organization that has a partnership with a large vendor indicated by some metal – but always gold or platinum; bronze and silver partners are not worthy.

Simon Munro

@simonmunro

Something from my Moleskine

More posts from me

I do most of my short-format blogging on CloudComments.net, so head over there for more current blog posts on cloud computing.
