 Cisco’s newly released Global Cloud Index estimates that annual global cloud traffic will grow from 130 exabytes to 1.6 zettabytes by 2015. The kicker, though, is that only 17 percent of that 1.6 zettabytes—equivalent to 22 trillion hours of online video*—actually goes from the cloud to end users via services like video streaming or web surfing. The vast majority (76 percent) of the data that the cloud shuffles around is internal to the datacenter, and the remaining 7 percent of traffic is datacenters communicating with each other.
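The split above is easy to sanity-check with back-of-the-envelope arithmetic; the percentages and the 1.6 zettabyte total come straight from the report as cited above:

```python
# Rough back-of-envelope from the Cisco figures quoted above.
total_zb = 1.6  # projected annual global cloud traffic for 2015, in zettabytes

split = {
    "cloud to end users": 0.17,
    "within the datacenter": 0.76,
    "datacenter to datacenter": 0.07,
}

for kind, share in split.items():
    print(f"{kind}: {share * total_zb:.3f} ZB/year")

# The three categories should account for all traffic:
assert abs(sum(split.values()) - 1.0) < 1e-9
```

That works out to roughly 1.2 zettabytes a year that never leaves the datacenter.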
The report suggests that this lopsided ratio of internal to external traffic will hold steady through 2015, and it blames virtualization for this. But I wonder if some emerging trends in IaaS might not tip the balance much further toward outbound traffic by cutting out a lot of legacy-related internal traffic.
Containers and the post-OS cloud
Cisco gives a number of reasons why the cloud datacenter’s internal traffic far outpaces its outbound traffic, most of which have to do with the particulars of the way the datacenter divides up basic compute and storage functions. Specifically, datacenters separate application servers from both database (SQL and NoSQL) and non-database (block, blob, cache, backing store, etc.) storage, so data is constantly traversing the internal network for everything from backup jobs to virtualization-based load balancing and failover.
It’s no surprise, then, that the report claims that the increasing adoption of virtualization is part of what’s driving this trend.
The ratio of traffic exiting the data center to traffic remaining within the data center might be expected to increase over time, because video files are bandwidth-heavy and do not require database or processing traffic commensurate with their file size. However, the ongoing virtualization of data centers offsets this trend. Virtualization of storage, for example, increases traffic within the data center because virtualized storage is no longer local to a rack or server.
What’s interesting here is that in moving storage away from the compute node and consolidating it physically, datacenter architects are just moving traffic from local, board-level buses to high-powered switches. I wonder, though: how long can this go on?
These switches are expensive and power-hungry, and for certain types of jobs (non-latency-bound, batch workloads, like Hadoop) the ideal configuration is to have chunks of storage tightly bound to the compute nodes. Now, the vagaries of magnetic spinning disks mean that it makes more sense to isolate all of that failure-prone, mechanical hardware into one unit and send storage traffic over the network, but when everything goes solid-state, will we see storage move back to the compute nodes?
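The locality idea behind those batch workloads is simple enough to sketch. Here is a minimal, hypothetical scheduler in the spirit of Hadoop-style data locality (the node names and the function are my own illustration, not any real scheduler's API): prefer running a task on a node that already holds the data block, so the bytes never cross a switch at all.

```python
def pick_node(block_replicas, free_nodes):
    """Pick a node to run a task on.

    block_replicas: set of node names that hold a copy of the data block.
    free_nodes: ordered list of nodes with spare task slots.
    Prefers a node-local placement so the block never crosses the network.
    """
    local = [n for n in free_nodes if n in block_replicas]
    if local:
        return local[0], "node-local (no network transfer)"
    # No free node holds the block: it has to travel over the switch.
    return free_nodes[0], "remote (block crosses the switch)"

replicas = {"node-a", "node-c"}
print(pick_node(replicas, ["node-b", "node-c"]))  # node-local placement
print(pick_node(replicas, ["node-b", "node-d"]))  # remote placement
```

The point is that every time the first branch wins, that is storage traffic the internal switches never see, which is exactly why tightly coupling disks to compute nodes is attractive for these jobs.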
In addition to a move of more storage back to the compute nodes themselves, I wonder if the emerging post-virtualization, post-OS cloud infrastructure will cut down on a lot of this traffic. If and when we eventually get away from virtualization and from the necessity of wrapping every single workload in a full-blown, bloated OS image, the amount of data that gets shuffled around will shrink massively. Think about it: if apps are wrapped in lightweight, modular containers, then there is a ton of OS cruft that won’t have to move through a switch for load balancing, failover, or even booting. Already, modern image storage platforms can compress the space that images take up by some 90 percent simply by not storing redundant OS data (a technique called deduplication). Now imagine if that redundant data never had to move through a switch at all. That would be a massive amount of internal bandwidth saved, and it would definitely change the ratio of internal to external datacenter traffic.
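The deduplication trick is worth seeing concretely. This is a minimal sketch of content-addressed, block-level dedup (not any particular platform's implementation): data is split into fixed-size blocks, each block is keyed by its SHA-256 hash, and identical blocks — say, a shared OS base image — are stored only once.

```python
import hashlib

def dedup_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks; store each unique block once,
    keyed by its SHA-256 hash (content addressing)."""
    store = {}   # hash -> block bytes, stored only once
    recipe = []  # ordered hashes needed to rebuild the original data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)
        recipe.append(h)
    return store, recipe

def reconstruct(store, recipe):
    return b"".join(store[h] for h in recipe)

# Two "images" sharing a common base (the redundant OS data):
base = b"OS" * 8192               # 16 KiB of identical base-image bytes
img1 = base + b"app1" * 1024      # base plus one app-specific block
img2 = base + b"app2" * 1024

store, recipe1 = dedup_blocks(img1)
more, recipe2 = dedup_blocks(img2)
store.update(more)                # shared blocks hash identically, so they merge

raw = len(img1) + len(img2)
stored = sum(len(b) for b in store.values())
print(f"raw: {raw} bytes, stored: {stored} bytes")  # raw: 40960, stored: 12288
```

Here the two images dedup down to about 30 percent of their raw size, and the reconstruction recipes guarantee nothing is lost. The same logic, applied at datacenter scale to thousands of near-identical VM images, is where the roughly 90 percent figure comes from — and where the saved switch traffic would come from.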
Of course, there are two possible reasons why Cisco wouldn’t raise the same points that I’ve raised above: 1) I’m wrong, or 2) Cisco is keen on making the case that the datacenter’s appetite for its switching hardware is going to grow, not shrink. I’m betting that the answer is #2.
Image: Flickr/catdancing
Have any news tips, or just want to send me feedback? You can reach me at jon underscore stokes at wired.com. I’m also on Twitter as @jonst0kes, and on Google+.