Cloud computing is, put simply, utilizing supercomputer power to perform complicated tasks over the Internet. (I understood this much thanks to Business Week.)
The article says “Cloud computing aims to apply that kind of power—measured in the tens of trillions of computations per second—to problems like analyzing risk in financial portfolios, delivering personalized medical information, even powering immersive computer games, in a way that users can tap through the Web. It does that by networking large groups of servers that often use low-cost consumer PC technology, with specialized connections to spread data-processing chores across them.”
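To make the "tap it through the Web" idea concrete, here is a rough Python sketch of a client handing a chore to a remote cloud service and reading back the answer. The endpoint URL and payload shape are invented purely for illustration; they are not any real provider's API.

import json
import urllib.request

# Hypothetical cloud endpoint; the heavy lifting happens on the servers behind it.
CLOUD_ENDPOINT = "https://cloud.example.com/api/analyze-portfolio"

def analyze_portfolio_remotely(positions):
    """Send portfolio data to the (hypothetical) cloud service and return its risk analysis."""
    payload = json.dumps({"positions": positions}).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

if __name__ == "__main__":
    # The client machine does no number-crunching of its own.
    print(analyze_portfolio_remotely([{"ticker": "XYZ", "quantity": 100}]))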
Express Computer throws more light on it: “The key feature of cloud computing is that both the software and the information held in it live on centrally located servers rather than on an end-user’s computer.” “…The architecture behind cloud computing is a massive network of ‘cloud servers’ interconnected as if in a grid running in parallel, sometimes using the technique of virtualization to maximize compute power per server.”
“Clouds will become dynamic components of enterprise and research grids, adding an "external" dimension of business flexibility by enhancing their home capacity whenever needed, on demand,” writes Wolfgang Gentzsch in GridToday. He maintains that grids are here to stay and will become more cloud-like, and that, he says, looks promising.
Infoworld explains cloud computing with examples such as SaaS (software as a service), utility computing, MSPs (managed service providers), and service commerce platforms.
What about security of data and the data centers? “SWIFT, a bank-transfer consortium, has announced plans to build a data centre in neutral Switzerland, so that data collected in Europe will not be stored in an American facility, where it could be subpoenaed by the United States government,” writes The Economist.
Links for Cloud Computing:
Wikipedia
New to grid computing - IBM
Friday, July 18, 2008
An introduction to Cloud Computing
Labels: computing, data, grid computing, information, server farms, storage, virtualization
4 comments:
Not quite... but this is a good start to the definition of cloud computing.
Cloud computing is about dynamic provisioning of logical partitions (LPARs), preconfigured software stacks, and so on. The 'cloud', to the end-user, appears as an 'infinite' number of computing partitions. Metering, workload management, and hardware virtualization enable these logical partitions to share physical hardware resources.
Grid Computing is about the coordinated execution of business tasks across a collection of resources. So the grid would sit on top of the cloud infrastructure.
This is from another post of mine on the topic:
Clouds and Grids are complements, not substitutes. Moreover, they solve very different problems and should really be treated as such. Unfortunately, though, Cloud/Grid/Utility are terms that have become very overloaded :).
Cloud Computing is about the dynamic provisioning of logical partitions (LPARs) and leveraging utility-computing technologies like metering and workload management to provide chargeback and goals-oriented execution for the physical resources consumed by those LPARs. So Amazon EC2, for example, can quickly and cheaply provision new OS images (LPARs) on which applications can run. The LPARs don't care about the applications: what they do, what data they need to access, and so on.
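(A toy Python sketch of the provisioning-plus-metering idea this comment describes; the class, image name, and hourly rate are hypothetical, and this is not Amazon's or IBM's actual API.)

import time
import uuid

class ToyCloud:
    """In-memory stand-in for a cloud that provisions partitions and meters their usage."""

    HOURLY_RATE = 0.10  # hypothetical price per partition-hour

    def __init__(self):
        self.partitions = {}  # partition id -> provisioning time

    def provision(self, image="generic-os-image"):
        """'Boot' a new logical partition from an image and start the usage meter."""
        partition_id = str(uuid.uuid4())
        self.partitions[partition_id] = time.time()
        print(f"provisioned partition {partition_id} from image {image!r}")
        return partition_id

    def release(self, partition_id):
        """Tear the partition down and return the chargeback for its lifetime."""
        started = self.partitions.pop(partition_id)
        hours = (time.time() - started) / 3600
        return hours * self.HOURLY_RATE

if __name__ == "__main__":
    cloud = ToyCloud()
    pid = cloud.provision()
    # ... run any application inside the partition; the cloud does not care what it does ...
    print(f"charged ${cloud.release(pid):.6f} for the partition's lifetime")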
Grid Computing is about the coordinated execution of some complex task across a collection of resources. For example, protein folding is a complex task which could be broken into discrete units of work, and each unit of work could be executed concurrently across a cluster of servers. The grid application infrastructure has the burden of creating the partitions of work and providing operational management (start, stop, dispatching, results aggregation, etc.) of those discrete chunks.
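(A toy Python sketch of the compute-grid pattern described here: break a task into discrete units of work, dispatch them concurrently, and aggregate the results. A local process pool stands in for the cluster of servers.)

from concurrent.futures import ProcessPoolExecutor

def work_unit(chunk):
    """One discrete unit of work: here, just a sum of squares over a slice of numbers."""
    return sum(x * x for x in chunk)

def run_grid_job(data, chunk_size=1000):
    """Partition the task, dispatch the chunks, and aggregate the partial results."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(work_unit, chunks))

if __name__ == "__main__":
    print(run_grid_job(list(range(100_000))))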
Within grid computing there are sub-categories: compute grids and data grids. Compute Grids are responsible for breaking large tasks into discrete chunks, executing some computations on them, and aggregating the results. Data Grids are about partitioning data across a collection of resources for scalability and higher performance. You should see Compute Grids and Data Grids working together to provide a high-performance, scalable processing infrastructure. You can read more about building high performance grids at: http://www-128.ibm.com/developerworks/websphere/techjournal/0804_antani/0804_antani.html. For an example architecture where compute grids and data grids are working together, see this specific section: http://www-128.ibm.com/developerworks/websphere/techjournal/0804_antani/0804_antani.html#xdegc
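(And a toy Python sketch of the data-grid side working with local computation: hash-partition records across a few "nodes", then let each node aggregate only the partition it owns. The node count and record shapes are invented for illustration.)

NODE_COUNT = 4  # pretend each partition below lives on a separate server

def partition(records):
    """Spread (key, value) records across NODE_COUNT nodes by hashing their keys."""
    nodes = [[] for _ in range(NODE_COUNT)]
    for key, value in records:
        nodes[hash(key) % NODE_COUNT].append((key, value))
    return nodes

def local_aggregate(node_records):
    """Work that runs next to the data it owns: total the values held on one node."""
    return sum(value for _, value in node_records)

if __name__ == "__main__":
    records = [(f"account-{i}", i * 10) for i in range(20)]
    per_node_totals = [local_aggregate(node) for node in partition(records)]
    print("grand total:", sum(per_node_totals))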
So to summarize: Clouds are about the dynamic provisioning of LPARs and leveraging metering and workload-management (WLM) technologies to manage and charge for those logical partitions. Grid computing is an application infrastructure and programming style that allows complex tasks to be broken into smaller pieces and executed across a collection of resources. Grid Computing is about executing business logic quickly; Cloud Computing is about provisioning infrastructure. The grid computing infrastructure would run on top of a cloud computing infrastructure.
This isn't all that new. Big iron machines, their operating systems, and the middleware stack have been doing this type of work for decades. The difference today is that Amazon EC2 can quickly and cheaply provision new LPARs, whereas LPARs in big iron machines are statically defined. Both leverage hardware virtualization (executing on shared hardware resources), OS virtualization via a VM/hypervisor, have some application container for executing the business logic per some QoS requirements (application servers, etc.), and leverage some type of workload management mechanism for metering and goals-oriented execution.
There is nothing wrong with statically defined LPARs as long as we have very smart workload management, good hypervisors, and hardware virtualization technologies (see System Z). Dynamic provisioning of resources within the datacenter already exists; see Tivoli Provisioning Manager and other such technologies. The future will probably be a more integrated and cohesive hardware and software stack for "private clouds" and "enterprise grids" (the 'enterprise' in "enterprise grids" implies that a high level of QoS, covering security, resource management, transactions, and so on, is expected).
I also discuss this in another blog post: http://www-128.ibm.com/developerworks/forums/thread.jspa?threadID=214794&tstart=0
Thanks for the details.