My humble attempt to keep track of and analyze the trends and possibilities in the Cloud Computing market.
I've had some discussions on the greenness of Cloud Computing over the last few weeks. Most people consider the Cloud to be Green IT, but there are skeptics. Their main argument against the greenness of the Cloud is usually componentisation.
Componentisation allows for ever greater speed of evolution and innovation. It also leads to more consumption: you may have more efficient units, but you end up consuming vastly more of them.
This might seem logical, but if you ask the inevitable question - what are the alternatives? - most people I talk to come up short. We are in a maelstrom of ever-increasing usage of computational units, cloud or no cloud, and as far as I can tell, the underlying business model of Cloud Computing - providing the most efficient and cost-effective platform for those computational units - really has to be greener than the waste we see on desktops and in in-house server farms.
New and novel forms of consumption always arise with componentisation, they say.
The first signs of this can be seen today in CI/test environment scenarios. Where we used to make do with a sparse set of environments, most Cloud companies and projects place no limit on how they use virtualization of such environments to boost productivity and quality. If we do a side-by-side greenness comparison today, the cloud wins by a large margin, but at some point this picture might reverse.
So, while I still believe the cloud to be green, we must watch the Cloud business models closely as they evolve, to ensure that the cost of Cloud computation does not get too low to measure - that would lead to massive waste and kill the idea of green clouds.
While the market is starting to get a grip on the terminology and categorization of Cloud Computing, Gartner seems to be lost in its own world. Failing to separate key characteristics, delivery models (details here) and deployment models, and leveling the field into five attributes, Gartner tries to push us back to ground zero.
*A much better approach to see how strongly a cloud solution (or service) adheres to the cloud computing model would be to discuss the combinations in the figure above.*
The five attributes of cloud computing are:
Service-Based: Consumer concerns are abstracted from provider concerns through service interfaces that are well-defined. The interfaces hide the implementation details and enable a completely automated response by the provider of the service to the consumer of the service. The service could be considered "ready to use" or "off the shelf" because the service is designed to serve the specific needs of a set of consumers, and the technologies are tailored to that need rather than the service being tailored to how the technology works. The articulation of the service feature is based on service levels and IT outcomes (availability, response time, performance versus price, and clear and predefined operational processes), rather than technology and its capabilities. In other words, what the service needs to do is more important than how the technologies are used to implement the solution.
Scalable and Elastic: The service can scale capacity up or down as the consumer demands at the speed of full automation (which may be seconds for some services and hours for others). Elasticity is a trait of shared pools of resources. Scalability is a feature of the underlying infrastructure and software platforms. Elasticity is associated with not only scale but also an economic model that enables scaling in both directions in an automated fashion. This means that services scale on demand to add or remove resources as needed.
Shared: Services share a pool of resources to build economies of scale. IT resources are used with maximum efficiency. The underlying infrastructure, software or platforms are shared among the consumers of the service (usually unknown to the consumers). This enables unused resources to serve multiple needs for multiple consumers, all working at the same time.
Metered by Use: Services are tracked with usage metrics to enable multiple payment models. The service provider has a usage accounting model for measuring the use of the services, which could then be used to create different pricing plans and models. These may include pay-as-you go plans, subscriptions, fixed plans and even free plans. The implied payment plans will be based on usage, not on the cost of the equipment. These plans are based on the amount of the service used by the consumers, which may be in terms of hours, data transfers or other use-based attributes delivered.
Uses Internet Technologies: The service is delivered using Internet identifiers, formats and protocols, such as URLs, HTTP, IP and representational state transfer Web-oriented architecture. Many examples of Web technology exist as the foundation for Internet-based services. Google's Gmail, Amazon.com's book buying, eBay's auctions and Lolcats' picture sharing all exhibit the use of Internet and Web technologies and protocols.
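To make the "Scalable and Elastic" attribute above a little more concrete, here is a toy sketch of the reconciliation loop that automated elasticity implies: capacity follows demand in both directions, and combined with "Metered by Use" you only pay for the units actually provisioned. The unit names, the capacity-per-unit figure and the load numbers are all invented for illustration; no vendor's actual mechanism is implied.

```python
def desired_capacity(current_load, capacity_per_unit=100):
    """How many units are needed to serve the current load (ceiling division)."""
    return max(1, -(-current_load // capacity_per_unit))

def reconcile(active_units, current_load):
    """Grow or shrink the pool so supply tracks demand, in both directions."""
    target = desired_capacity(current_load)
    while len(active_units) < target:
        active_units.append("unit-%d" % len(active_units))  # stand-in for provisioning a resource
    while len(active_units) > target:
        active_units.pop()                                   # stand-in for releasing a resource
    return active_units

units = ["unit-0"]
for load in [50, 480, 950, 120]:   # e.g. requests per second over time
    units = reconcile(units, load)
    print("load %4d -> %2d units" % (load, len(units)))
```

The point of the sketch is only that both the scale-up and the scale-down paths are automated; a real service wires the reconcile step to monitoring data and to the provider's provisioning API.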
Good blog post from a quite interesting Twitter discussion on cloud technology:
If a cloud still links you to hardware failure then it's a great compute grid, but it still requires you at a fundamental level to worry about the hardware, whether that be CPU or disk. A cloud should virtualise that, meaning that only software failures are the issue; the hardware has been truly virtualised and scaled.
I like the clean separation of just provisioning. Apps do the rest, enterprises won't like it tho
And here we are at a key finding. People (here, enterprises) are still looking for silver bullets, which IMHO makes no sense whatsoever from a technological point of view. From a philosophical, political or cultural point of view, though, it is easy to see this effect in practice. So we are left to take a stand: should we "go with the flow", or should we try harder and educate to ensure that meaningful decisions are made?
I'm always shocked by comments like this:
So let's start with the nonsensical idea that your cloud infrastructure is secure.
It is 2009, and the whole idea of any secure infrastructure is just totally naive.
Your cloud infrastructure is likely secured with respect to some requirements, but it's just as certain to be insecure for other requirements. The cloud has nothing to do with this. Every infrastructure on the planet is secure for some requirements and insecure for others.
If your cloud provider refuses to answer any specific question about their security architecture related to your security requirements, run—don't walk—away from that vendor.
Good blog post on an old trick:
- The missing piece - middleware virtualization
If you take pretty much any existing application, add another machine into the network, and watch what happens, you wouldn't be surprised to find that nothing much happens at all.
Non-virtualized middleware
Today's applications aren't able to dynamically take advantage of new computing resources that become available. That's also true for cloud-based environments like EC2 - the fact that you can now add machines easily is nice, but it doesn't mean that the application can do anything with them. What is missing is a layer that helps the application take advantage of these new resources dynamically, as they are being added to the system. This is where middleware virtualization comes to the rescue.
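As a rough illustration of what such a layer does (reduced to a single process), the sketch below uses an in-memory queue and registry as stand-ins for the shared, distributed primitives a real middleware-virtualization product would provide: workers announce themselves and pull work from the shared queue, so a newly added "machine" starts contributing without any change to the application. This is my own minimal sketch, not the API of any particular product.

```python
import queue
import threading

# In-process stand-ins for the shared, distributed primitives a real
# middleware-virtualization layer would provide across machines.
work_queue = queue.Queue()
registry = []  # newly added "machines" announce themselves here

def worker(name):
    """A worker on a freshly added machine: register, then pull work until told to stop."""
    registry.append(name)
    while True:
        task = work_queue.get()
        if task is None:          # poison pill: shut down
            break
        print("%s processed task %s" % (name, task))
        work_queue.task_done()

# The application only ever talks to the queue; it never knows how many
# machines are behind it.
workers = [threading.Thread(target=worker, args=("node-%d" % i,)) for i in range(2)]
for w in workers:
    w.start()

for task in range(10):
    work_queue.put(task)

# Simulate adding a machine at runtime: the extra worker simply joins in.
extra = threading.Thread(target=worker, args=("node-2",))
extra.start()

work_queue.join()
for _ in range(3):
    work_queue.put(None)          # stop all three workers
```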
VMworld: VMware is introducing a new vSphere architecture and management product to manage a data centre as an internal or external cloud of services.
The idea, introduced by VMware CEO Paul Maritz at VMworld in Cannes, is to have a set of interfaces looking downwards at the IT plumbing and another set looking at the applications. There is a vCompute interface to look at the compute resources and see what is available and provision them. A vStorage layer gets told by storage resources - the arrays for example - what they can do, such as block copy or deduplication. Then vSphere admin staff, and ultimately users themselves, can provision storage resources.
Ref: http://www.theregister.co.uk/2009/02/24/vmware-vsphere/
Thousands of Twitter messages carrying the words "gmail" or "gfail" will teach you that Google's free web-based e-mail platform is currently down. A Google spokesperson told Pocket Lint that their engineers are working on it but have no clue why the errors are turning up.
Meanwhile, Google posted this on a discussion forum:
We're aware of a problem with Gmail affecting a small subset of users. The affected users are unable to access Gmail. We will provide an update by February 24, 2009 6:30 AM PST detailing when we expect to resolve the problem. Please note that this resolution time is an estimate and may change.
(POP3 / IMAP seems to be still functioning, and the problem doesn't appear to affect Google Apps at this point)
I'm not buying the "small subset" part, and considering that Pocket Lint says the problem started occurring around 10:20am GMT, taking three hours before even telling everyone what's going on is an incredibly long time frame, in my opinion.
Ref: http://www.techcrunch.com/2009/02/24/trouble-in-the-clouds-gmail-turns-into-gfail/
The cloud computing market is in a period of excitement, growth and high potential, but will still require several years and many changes in the market before cloud computing — or service-enabled application platforms (SEAPs) — is a mainstream IT effort, according to Gartner, Inc.
Gartner said that technologically aggressive application development organizations should look to cloud computing for tactical projects through 2011, during which time the market will begin to mature and be dominated by a select group of vendors.
Reference: http://www.gartner.com/it/page.jsp?id=871113
Cloud definitions are vague.
That is: how can we expect IT people to strategize and decide on IT direction and tactics if we can't even describe for them, in any consistent way, what the real issues are? For that, we need a commonly accepted definition, even if it is not great.
Let's at least ask the experts to start their definitions with actual definitions.
Reference: http://blogs.gartner.com/daryl_plummer/
StratusLab
StratusLab is an informal collaboration between CNRS/LAL, GRNET, SixSq Sàrl, and UCM. The collaboration is open to anyone who would like to participate. The collaboration focuses on cloud technologies and how those technologies can be used productively in research and commercial environments.
The key issue with productive use of the technologies is effective management of the cloud resources. For broad adoption, cloud resources must be manageable with the same (or similar) techniques currently used by administrators of data centers. The initial activities of the collaboration will investigate how different management techniques can be adapted to cloud resources.
Reference: http://www.stratuslab.org/wiki/index.php/Main_Page
EUCALYPTUS - Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems
EUCALYPTUS - Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems - is an open-source software infrastructure for implementing "cloud computing" on clusters. The current interface to EUCALYPTUS is compatible with Amazon's EC2 interface, but the infrastructure is designed to support multiple client-side interfaces. EUCALYPTUS is implemented using commonly available Linux tools and basic Web-service technologies, making it easy to install and maintain.
The current release is version 1.3 and it includes the following features:
- Interface compatibility with EC2 (both Web service and Query interfaces); see the sketch after this list
- Simple installation and deployment using Rocks cluster-management tools
- Stand-alone RPMs for non-Rocks RPM based systems
- Secure internal communication using SOAP with WS-security
- Overlay functionality requiring no modification to the target Linux environment
- Basic "Cloud Administrator" tools for system management and user accounting
- The ability to configure multiple clusters, each with private internal network addresses, into a single Cloud.
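Because the interface is EC2-compatible, existing EC2 client libraries can usually be pointed at a Eucalyptus front end just by swapping the endpoint. A rough sketch using the Python boto library, assuming a cloud controller on the default port 8773; the host name and credentials are placeholders.

```python
import boto
from boto.ec2.regioninfo import RegionInfo

# Placeholder endpoint and credentials for a local Eucalyptus installation.
region = RegionInfo(name="eucalyptus", endpoint="cloud.example.org")

conn = boto.connect_ec2(
    aws_access_key_id="YOUR_EUCA_ACCESS_KEY",
    aws_secret_access_key="YOUR_EUCA_SECRET_KEY",
    is_secure=False,
    region=region,
    port=8773,
    path="/services/Eucalyptus",   # the EC2-compatible service path
)

# From here on it is plain EC2-style boto: list the registered images.
for image in conn.get_all_images():
    print(image.id, image.location)
```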
Reference: http://eucalyptus.cs.ucsb.edu/
Platform components as a service will hit the software market hard in 2009 and 2010, but until developers and architects understand how to leverage platform components in a clear and consistent way, they will add more pain than salvation... Reading up on architecture axioms and distributed architectures, and analysing your current designs and architectures before moving to platform component services, is advised.
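To make "platform component" concrete: a hosted message queue lets you decouple producers from consumers without operating a broker of your own. CloudMQ itself speaks JMS; the sketch below uses Amazon SQS through boto instead, purely to show the pattern in a few lines (credentials and the queue name are placeholders).

```python
import boto
from boto.sqs.message import Message

# A hosted queue as a platform component: no broker of your own to run.
# Credentials and the queue name are placeholders.
conn = boto.connect_sqs("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY")
q = conn.create_queue("orders")          # idempotent: returns the queue if it exists

# Producer side: hand the work to the platform component and move on.
msg = Message()
msg.set_body("process-order-42")
q.write(msg)

# Consumer side (typically a separate process on another machine).
received = q.read(visibility_timeout=30)
if received is not None:
    print("got:", received.get_body())
    q.delete_message(received)           # acknowledge so it is not redelivered
```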
Reference: http://www.cloudmq.com/
Cloud computing platforms offer many benefits including:
- Cheaper operational costs.
- Dynamic scaling in response to load spikes.
- Roll-on, roll-off deployments for e.g. newspaper archive processing.
These platforms exist as the result of the investment of companies such as Amazon, Google and Microsoft in developing cost-effective infrastructure with system to administrator ratios of 2500:1 (whilst the average enterprise manages around 150:1 and inefficient properties manage maybe 10:1).
Key to allowing these infrastructures to be efficient and in turn deliver the benefits above is having applications architected such that:
- They don't require masses of administrator intervention when they go wrong.
- They can be installed with minimal administrator effort because there's no need to worry about tweaking URLs, IP addresses, database connections etc.
- They readily support horizontal scaling, e.g. because they contain an abstraction that can support sharding of data storage (a sketch follows below).
In essence an application must be designed for zero administrator intervention and fully automated deployment. It should also have a variable workload component that magnifies the savings of the architectural properties above.
Strange then that many a developer expects to move their existing application, full of enterprise DNA (static configuration, vertical clusters, no horizontal scaling, high administration costs) to such an offering with minimal change. They even complain when it proves difficult because all those "enterprise features" aren't present. Why does this happen?
I believe it's because these developers have fundamentally misunderstood how cloud computing delivers its benefits. They see the cheap prices but don't stop to consider where the cost saving comes from. Some of it is achieved by cloud platform vendors getting large discounts on huge hardware orders but a significant proportion comes from the fact that they don't need to provide (via human resources or APIs) the sysadmin functions required for conventional hosting solutions.
Quite simply, typical applications, their architectures and associated administration practices are not set up for cloud platforms. Some of them may be able to run on these platforms with sufficient hackery, brute force and associated cost. However, if the motivation for a move to the cloud is merely to reduce kit costs, one might well be better off looking for a cheaper conventional hosting solution.
In summary, making the best of the cloud requires that we take an architectural view, something that we've proven remarkably bad at over and over. Simply deploying an application unchanged to the cloud is unlikely to deliver much benefit.
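For the sharding point in the list above, the abstraction in question is often as simple as routing every key to one of N storage shards, so that adding shards (and the machines behind them) adds capacity. A minimal sketch, with the shards faked as in-memory dicts; any real implementation would put a database or storage node behind each one.

```python
import hashlib

class ShardedStore:
    """Minimal sharding abstraction: every key is routed to one of N shards.

    The shards here are plain dicts; in a real system each would be a
    separate database or storage node, so adding shards adds capacity.
    """

    def __init__(self, num_shards):
        self.shards = [dict() for _ in range(num_shards)]

    def _shard_for(self, key):
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return self.shards[int(digest, 16) % len(self.shards)]

    def put(self, key, value):
        self._shard_for(key)[key] = value

    def get(self, key):
        return self._shard_for(key).get(key)

store = ShardedStore(num_shards=4)
store.put("article:1984", "newspaper archive entry")
print(store.get("article:1984"))
```

Naive modulo routing like this means changing the shard count moves most keys around; consistent hashing is the usual fix, but the architectural point stands either way: the application talks to the abstraction, not to individual machines.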
Reference: http://www.dancres.org/blitzblog/2009/01/25/cutting-corners/
According to Gartner, the important criteria for a cloud development platform include:
Here are the three criteria I have for determining whether something is a cloud service or not:
1. The service is accessible via a web browser (non-proprietary) or web services API.
2. There is zero capital expenditure required to get started.
3. You pay only for what you use as you use it.
For "at least" the next six months, Amazon will provide a certain amount of free cloud sitting. Every month, SimpleDB users will receive 25 machine hours, 1GB of data transfers in, 1GB of transfers out, and 1GB of storage at no charge. Once those freebies are exhausted, you'll pay $0.14 per machine hour, $0.10 per GB in, $0.10 to $0.17 per GB out, and $0.25 per GB of storage. Storage pricing is down 83 per cent from the cloud's limited beta.
You can also transfer an unlimited amount of data from EC2 at no charge.
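To put those rates in perspective, here is a back-of-the-envelope bill for a month of light SimpleDB use once the free allowance is exhausted. The usage numbers are made up for illustration; the rates are the ones quoted above, taking the top of the transfer-out range.

```python
# Back-of-the-envelope SimpleDB bill using the rates quoted above.
# Usage numbers are invented; the free tier (25 machine hours, 1 GB in,
# 1 GB out, 1 GB of storage per month) is subtracted first.
RATES = {"machine_hours": 0.14, "gb_in": 0.10, "gb_out": 0.17, "gb_stored": 0.25}
FREE  = {"machine_hours": 25,   "gb_in": 1,    "gb_out": 1,    "gb_stored": 1}
usage = {"machine_hours": 100,  "gb_in": 5,    "gb_out": 10,   "gb_stored": 20}

bill = sum(RATES[k] * max(0, usage[k] - FREE[k]) for k in usage)
print("Estimated monthly bill: $%.2f" % bill)   # $17.18 for this example
```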
Let's say you are running multiple different Amazon Machine Images (AMIs), which contain your applications, libraries, data and configuration settings on Amazon's Elastic Compute Cloud (EC2), and you are using Amazon's S3 for storage. Don't think running everything in the cloud will abstract away potential management problems. You'll still have a system administration headache until you script something or update your AMIs with your new software and application code.
A better - and obvious - answer would be if you could have all of your images, code and applications available in a dashboard where you could simply update everything on the fly.
An even better answer would be to not have to perform any system administration functions at all. Currently, the only way to make that happen with Amazon is through third-party tools like 3Tera and RightScale (http://www.rightscale.com/).
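Until those tools fit your needs, "scripting something" usually means talking to the EC2 API directly. The sketch below, using boto, shows the sort of glue script the paragraph above alludes to: list which instances run from which AMI, then start a fresh instance from an updated image. Credentials and the AMI id are placeholders.

```python
import boto

# Ad-hoc glue in the absence of a dashboard: inventory running instances
# and roll out a newly baked AMI by hand. Placeholder credentials and AMI id.
conn = boto.connect_ec2("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY")

for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.image_id, instance.state)

# Start one instance from the updated image; retiring the old ones is
# another script (and another headache) entirely.
conn.run_instances("ami-12345678", min_count=1, max_count=1,
                   instance_type="m1.small")
```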
Microsoft offers a way to host your .NET applications in the Cloud, with a pricing model yet to be officially announced, and offers integration with some Microsoft services/applications.
Amazon, on the other hand, does not offer a way to host your web applications out of the box on the Cloud, but simply provides virtualized hardware on which you can do whatever you'd like (well, as usual it's a bit more complex than that, but that's pretty much it).
So basically, the Google and Microsoft offers are PaaS solutions: they offer a Platform on which you can deploy your applications. On the other hand, Amazon offers an IaaS solution: an Infrastructure which you can use.
PaaS or IaaS: do you want the Cloud Provider to offer you a way to host your applications (if you can live with their technical restrictions), or an infrastructure allowing you to host your applications the way you want, without restrictions?
Amazon CloudFront delivers your content using a global network of edge locations. Requests for your objects are automatically routed to the nearest edge location, so content is delivered with the best possible performance.
geek footnote: the bigass images on dopplr's new city pages are served from Amazon's Cloudfront CDN. And it was really easy.
Very happy about Amazon CloudFront, about 100-120 msec for static files like images, js, etc... in Spain (going to France). Faster than S3
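For the curious, the setup behind those quotes is roughly: keep the static files in an S3 bucket that a CloudFront distribution points at (the distribution itself is created once, via the API or a management tool), then reference the files through the distribution's domain. A rough sketch of the upload step with boto; the bucket name, credentials and distribution domain are placeholders.

```python
from boto.s3.connection import S3Connection
from boto.s3.key import Key

# Upload a static asset to the S3 bucket behind an (already existing)
# CloudFront distribution. All names and credentials are placeholders.
conn = S3Connection("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY")
bucket = conn.get_bucket("static.example.com")

key = Key(bucket)
key.key = "images/city-header.jpg"
key.set_contents_from_filename("city-header.jpg")
key.set_acl("public-read")   # the edge locations must be able to fetch it

# Pages then reference the file via the distribution, not the bucket:
DISTRIBUTION_DOMAIN = "d1234example.cloudfront.net"
print("http://%s/%s" % (DISTRIBUTION_DOMAIN, key.key))
```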
- 1. Licensing - if you stay Open Source you are OK... commercial licences may kill your budget.
- 2. Persistence - application persistence: Amazon does not like filesystems, so use a DB for persistence (see the sketch after this list).
- 3. Horizontal scalability - not sure this one gets the point... vertical scalability should work even better on Amazon. Anyone?
- 4. Disaster recovery - it's a distributed system; cope and be happy.
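For the persistence point above: instance-local disks on EC2 are ephemeral, so application state should live in a service that outlives any single machine. A minimal sketch using Amazon SimpleDB via boto; the domain name, item name and credentials are placeholders, and S3 or a relational database would serve the same purpose.

```python
import boto

# Keep application state off the instance: local EC2 disks are ephemeral,
# so state goes to SimpleDB (or S3, or a database) instead.
# Credentials, domain and item names are placeholders.
conn = boto.connect_sdb("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY")
domain = conn.create_domain("app-state")      # idempotent

# Write state keyed by item name ...
domain.put_attributes("session-42", {"user": "alice", "cart_items": "3"})

# ... and read it back from any instance, not just the one that wrote it.
print(domain.get_attributes("session-42"))
```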
Reference: http://broadcast.oreilly.com/2008/10/considerations-in-building-web.html